I1229 10:47:06.489284 9 e2e.go:224] Starting e2e run "8dc2c797-2a28-11ea-9252-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577616425 - Will randomize all specs
Will run 201 of 2164 specs

Dec 29 10:47:07.188: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 10:47:07.192: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 29 10:47:07.217: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 29 10:47:07.275: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 29 10:47:07.275: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 29 10:47:07.275: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 29 10:47:07.290: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 29 10:47:07.290: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 29 10:47:07.290: INFO: e2e test version: v1.13.12
Dec 29 10:47:07.293: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:47:07.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
Dec 29 10:47:07.524: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-25hgr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 29 10:47:07.529: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 29 10:47:48.064: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-25hgr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 10:47:48.064: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 10:47:49.545: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:47:49.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-25hgr" for this suite.
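Editor's note: the UDP check above runs `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081` from a host-exec pod against a pod named netserver-0. The log does not show how that pod is defined; a rough sketch of a UDP-echo pod of this kind (image, args, and labels are all assumptions, not taken from the log) might look like:

```yaml
# Hypothetical sketch only — the e2e framework builds its netserver pods
# internally; the image and flags below are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21  # assumed; not shown in the log
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8081
      protocol: UDP
```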
Dec 29 10:48:15.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:48:15.678: INFO: namespace: e2e-tests-pod-network-test-25hgr, resource: bindings, ignored listing per whitelist
Dec 29 10:48:16.017: INFO: namespace e2e-tests-pod-network-test-25hgr deletion completed in 26.45565646s

• [SLOW TEST:68.724 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:48:16.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-vmjb4
Dec 29 10:48:27.085: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-vmjb4
STEP: checking the pod's current state and verifying that restartCount is present
Dec 29 10:48:27.091: INFO: Initial restart count of pod liveness-exec is 0
Dec 29 10:49:21.907: INFO: Restart count of pod e2e-tests-container-probe-vmjb4/liveness-exec is now 1 (54.816220846s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:49:22.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vmjb4" for this suite.
Dec 29 10:49:30.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:49:30.316: INFO: namespace: e2e-tests-container-probe-vmjb4, resource: bindings, ignored listing per whitelist
Dec 29 10:49:30.588: INFO: namespace e2e-tests-container-probe-vmjb4 deletion completed in 8.500182818s

• [SLOW TEST:74.570 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:49:30.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
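Editor's note: the DaemonSet test that follows creates "a simple DaemonSet \"daemon-set\"". The log does not include the manifest; a minimal sketch of such a DaemonSet (the image and label key are assumptions, not taken from the log) could be:

```yaml
# Hypothetical sketch of the "daemon-set" object this test creates;
# image and labels are illustrative assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set  # assumed label key
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine  # assumed; not shown in the log
        ports:
        - containerPort: 80
```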
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 29 10:49:30.878: INFO: Number of nodes with available pods: 0
Dec 29 10:49:30.879: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:31.925: INFO: Number of nodes with available pods: 0
Dec 29 10:49:31.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:33.441: INFO: Number of nodes with available pods: 0
Dec 29 10:49:33.441: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:34.058: INFO: Number of nodes with available pods: 0
Dec 29 10:49:34.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:34.963: INFO: Number of nodes with available pods: 0
Dec 29 10:49:34.963: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:35.895: INFO: Number of nodes with available pods: 0
Dec 29 10:49:35.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:37.154: INFO: Number of nodes with available pods: 0
Dec 29 10:49:37.154: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:37.918: INFO: Number of nodes with available pods: 0
Dec 29 10:49:37.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:38.911: INFO: Number of nodes with available pods: 0
Dec 29 10:49:38.911: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:39.903: INFO: Number of nodes with available pods: 0
Dec 29 10:49:39.903: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 10:49:40.906: INFO: Number of nodes with available pods: 1
Dec 29 10:49:40.906: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 29 10:49:41.106: INFO: Number of nodes with available pods: 1
Dec 29 10:49:41.107: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tp645, will wait for the garbage collector to delete the pods
Dec 29 10:49:42.688: INFO: Deleting DaemonSet.extensions daemon-set took: 109.476366ms
Dec 29 10:49:45.289: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.601003337s
Dec 29 10:49:49.062: INFO: Number of nodes with available pods: 0
Dec 29 10:49:49.062: INFO: Number of running nodes: 0, number of available pods: 0
Dec 29 10:49:49.077: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tp645/daemonsets","resourceVersion":"16447535"},"items":null}
Dec 29 10:49:49.090: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tp645/pods","resourceVersion":"16447535"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:49:49.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tp645" for this suite.
Dec 29 10:49:55.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:49:55.395: INFO: namespace: e2e-tests-daemonsets-tp645, resource: bindings, ignored listing per whitelist
Dec 29 10:49:55.461: INFO: namespace e2e-tests-daemonsets-tp645 deletion completed in 6.318591479s

• [SLOW TEST:24.872 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:49:55.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 29 10:49:55.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dcgx6'
Dec 29 10:49:57.693: INFO: stderr: ""
Dec 29 10:49:57.693: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
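Editor's note: the RC piped into `kubectl create -f -` above is not shown in the log; only the name (redis-master) and the selector (app=redis, visible in the poll lines below) can be recovered. A minimal sketch consistent with those (replica count, image, and port are assumptions):

```yaml
# Hypothetical sketch of the "redis-master" ReplicationController;
# only name and the app=redis selector come from the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1  # assumed; the log shows a single pod
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis  # assumed; image not shown in the log
        ports:
        - containerPort: 6379
```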
Dec 29 10:49:58.708: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:49:58.709: INFO: Found 0 / 1
Dec 29 10:49:59.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:49:59.714: INFO: Found 0 / 1
Dec 29 10:50:00.706: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:00.706: INFO: Found 0 / 1
Dec 29 10:50:01.748: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:01.748: INFO: Found 0 / 1
Dec 29 10:50:02.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:02.714: INFO: Found 0 / 1
Dec 29 10:50:03.709: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:03.709: INFO: Found 0 / 1
Dec 29 10:50:04.754: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:04.754: INFO: Found 0 / 1
Dec 29 10:50:05.705: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:05.705: INFO: Found 0 / 1
Dec 29 10:50:06.726: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:06.726: INFO: Found 1 / 1
Dec 29 10:50:06.726: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 29 10:50:06.732: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:06.732: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 29 10:50:06.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-v68vc --namespace=e2e-tests-kubectl-dcgx6 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 29 10:50:06.844: INFO: stderr: ""
Dec 29 10:50:06.844: INFO: stdout: "pod/redis-master-v68vc patched\n"
STEP: checking annotations
Dec 29 10:50:06.885: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:50:06.886: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:50:06.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dcgx6" for this suite.
Dec 29 10:50:30.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:50:30.994: INFO: namespace: e2e-tests-kubectl-dcgx6, resource: bindings, ignored listing per whitelist
Dec 29 10:50:31.217: INFO: namespace e2e-tests-kubectl-dcgx6 deletion completed in 24.322738176s

• [SLOW TEST:35.756 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:50:31.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 29 10:50:31.467: INFO: Waiting up to 5m0s for pod "pod-089a7e94-2a29-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-9dk4m" to be "success or failure"
Dec 29 10:50:31.491: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.661607ms
Dec 29 10:50:33.512: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04509166s
Dec 29 10:50:35.538: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070495944s
Dec 29 10:50:37.598: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130756257s
Dec 29 10:50:39.608: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.140862732s
Dec 29 10:50:41.622: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15515061s
STEP: Saw pod success
Dec 29 10:50:41.623: INFO: Pod "pod-089a7e94-2a29-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 10:50:41.629: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-089a7e94-2a29-11ea-9252-0242ac110005 container test-container:
STEP: delete the pod
Dec 29 10:50:42.072: INFO: Waiting for pod pod-089a7e94-2a29-11ea-9252-0242ac110005 to disappear
Dec 29 10:50:42.085: INFO: Pod pod-089a7e94-2a29-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:50:42.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9dk4m" for this suite.
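Editor's note: the tmpfs emptyDir pod above runs to completion and is checked for "success or failure". The manifest is not in the log; a rough sketch of a pod exercising a tmpfs-backed emptyDir (image, command, and mount path are assumptions — the e2e test uses its own mounttest image):

```yaml
# Hypothetical sketch of an emptyDir-on-tmpfs test pod;
# image, command, and paths are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs  # real test pods get a generated UID suffix
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox  # assumed
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory  # tmpfs-backed emptyDir, as in the test name
```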
Dec 29 10:50:48.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:50:48.329: INFO: namespace: e2e-tests-emptydir-9dk4m, resource: bindings, ignored listing per whitelist
Dec 29 10:50:48.438: INFO: namespace e2e-tests-emptydir-9dk4m deletion completed in 6.340784682s

• [SLOW TEST:17.220 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:50:48.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 29 10:50:48.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 29 10:50:48.880: INFO: stderr: ""
Dec 29 10:50:48.881: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:50:48.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dlcn9" for this suite.
Dec 29 10:50:54.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:50:55.111: INFO: namespace: e2e-tests-kubectl-dlcn9, resource: bindings, ignored listing per whitelist
Dec 29 10:50:55.129: INFO: namespace e2e-tests-kubectl-dlcn9 deletion completed in 6.228256445s

• [SLOW TEST:6.690 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:50:55.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-6bhjh/configmap-test-16dbeafe-2a29-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 10:50:55.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-6bhjh" to be "success or failure"
Dec 29 10:50:55.466: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.161097ms
Dec 29 10:50:57.488: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068487395s
Dec 29 10:50:59.498: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078153935s
Dec 29 10:51:01.559: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139593401s
Dec 29 10:51:03.604: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184444752s
Dec 29 10:51:05.618: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.197789391s
Dec 29 10:51:07.695: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.27549443s
STEP: Saw pod success
Dec 29 10:51:07.696: INFO: Pod "pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 10:51:07.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005 container env-test:
STEP: delete the pod
Dec 29 10:51:07.902: INFO: Waiting for pod pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005 to disappear
Dec 29 10:51:07.912: INFO: Pod pod-configmaps-16ddd819-2a29-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:51:07.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6bhjh" for this suite.
Dec 29 10:51:14.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:51:14.221: INFO: namespace: e2e-tests-configmap-6bhjh, resource: bindings, ignored listing per whitelist
Dec 29 10:51:14.239: INFO: namespace e2e-tests-configmap-6bhjh deletion completed in 6.316579595s

• [SLOW TEST:19.110 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:51:14.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 29 10:51:24.788: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:51:52.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-bz5zd" for this suite.
Dec 29 10:51:58.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:51:58.682: INFO: namespace: e2e-tests-namespaces-bz5zd, resource: bindings, ignored listing per whitelist
Dec 29 10:51:58.806: INFO: namespace e2e-tests-namespaces-bz5zd deletion completed in 6.375271334s
STEP: Destroying namespace "e2e-tests-nsdeletetest-qfcb8" for this suite.
Dec 29 10:51:58.810: INFO: Namespace e2e-tests-nsdeletetest-qfcb8 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-8g2l4" for this suite.
Dec 29 10:52:05.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:52:05.158: INFO: namespace: e2e-tests-nsdeletetest-8g2l4, resource: bindings, ignored listing per whitelist
Dec 29 10:52:05.257: INFO: namespace e2e-tests-nsdeletetest-8g2l4 deletion completed in 6.447185746s

• [SLOW TEST:51.018 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:52:05.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 10:52:05.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-t6hqj'
Dec 29 10:52:05.544: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 29 10:52:05.544: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 29 10:52:07.982: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bjbjk]
Dec 29 10:52:07.983: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bjbjk" in namespace "e2e-tests-kubectl-t6hqj" to be "running and ready"
Dec 29 10:52:07.990: INFO: Pod "e2e-test-nginx-rc-bjbjk": Phase="Pending", Reason="", readiness=false. Elapsed: 7.125573ms
Dec 29 10:52:10.009: INFO: Pod "e2e-test-nginx-rc-bjbjk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025989926s
Dec 29 10:52:12.173: INFO: Pod "e2e-test-nginx-rc-bjbjk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190257898s
Dec 29 10:52:14.199: INFO: Pod "e2e-test-nginx-rc-bjbjk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216620718s
Dec 29 10:52:16.240: INFO: Pod "e2e-test-nginx-rc-bjbjk": Phase="Running", Reason="", readiness=true. Elapsed: 8.256968047s
Dec 29 10:52:16.240: INFO: Pod "e2e-test-nginx-rc-bjbjk" satisfied condition "running and ready"
Dec 29 10:52:16.240: INFO: Wanted all 1 pods to be running and ready. Result: true.
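Editor's note: the deprecation warning above says `--generator=run/v1` will be removed and suggests `kubectl create` instead. A ReplicationController manifest roughly equivalent to what that generator produced could be fed to `kubectl create -f` (the `run: <name>` label is how the run/v1 generator labels its objects, to the best of my knowledge; treat the details as assumptions):

```yaml
# Hypothetical equivalent of `kubectl run e2e-test-nginx-rc --generator=run/v1`;
# label scheme and replica count are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```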
Pods: [e2e-test-nginx-rc-bjbjk] Dec 29 10:52:16.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-t6hqj' Dec 29 10:52:16.473: INFO: stderr: "" Dec 29 10:52:16.473: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Dec 29 10:52:16.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-t6hqj' Dec 29 10:52:16.643: INFO: stderr: "" Dec 29 10:52:16.643: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 29 10:52:16.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t6hqj" for this suite. Dec 29 10:52:40.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 29 10:52:40.871: INFO: namespace: e2e-tests-kubectl-t6hqj, resource: bindings, ignored listing per whitelist Dec 29 10:52:40.892: INFO: namespace e2e-tests-kubectl-t6hqj deletion completed in 24.239965015s • [SLOW TEST:35.634 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:52:40.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:53:41.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5dhhj" for this suite.
Dec 29 10:54:05.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:54:05.240: INFO: namespace: e2e-tests-container-probe-5dhhj, resource: bindings, ignored listing per whitelist
Dec 29 10:54:05.350: INFO: namespace e2e-tests-container-probe-5dhhj deletion completed in 24.263581819s

• [SLOW TEST:84.458 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:54:05.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:54:15.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ld2s9" for this suite.
Dec 29 10:54:22.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:54:22.089: INFO: namespace: e2e-tests-emptydir-wrapper-ld2s9, resource: bindings, ignored listing per whitelist
Dec 29 10:54:22.301: INFO: namespace e2e-tests-emptydir-wrapper-ld2s9 deletion completed in 6.45709051s

• [SLOW TEST:16.951 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:54:22.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 29 10:54:22.739: INFO: Waiting up to 5m0s for pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005" in namespace "e2e-tests-var-expansion-2mwrr" to be "success or failure"
Dec 29 10:54:22.753: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.012697ms
Dec 29 10:54:25.141: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4025797s
Dec 29 10:54:27.160: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421397889s
Dec 29 10:54:29.177: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438109927s
Dec 29 10:54:31.885: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.146037458s
Dec 29 10:54:33.896: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.157614261s
STEP: Saw pod success
Dec 29 10:54:33.897: INFO: Pod "var-expansion-926f8c92-2a29-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 10:54:33.902: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-926f8c92-2a29-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 10:54:34.046: INFO: Waiting for pod var-expansion-926f8c92-2a29-11ea-9252-0242ac110005 to disappear
Dec 29 10:54:34.064: INFO: Pod var-expansion-926f8c92-2a29-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:54:34.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2mwrr" for this suite.
Dec 29 10:54:40.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:54:40.229: INFO: namespace: e2e-tests-var-expansion-2mwrr, resource: bindings, ignored listing per whitelist
Dec 29 10:54:40.229: INFO: namespace e2e-tests-var-expansion-2mwrr deletion completed in 6.156303981s

• [SLOW TEST:17.928 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:54:40.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 10:54:40.530: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 96.243334ms)
Dec 29 10:54:40.594: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 64.529397ms)
Dec 29 10:54:40.633: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 38.130596ms)
Dec 29 10:54:40.659: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.878675ms)
Dec 29 10:54:40.674: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.269231ms)
Dec 29 10:54:40.706: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.032305ms)
Dec 29 10:54:40.734: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.983168ms)
Dec 29 10:54:40.753: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.369417ms)
Dec 29 10:54:40.763: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.445123ms)
Dec 29 10:54:40.767: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.505386ms)
Dec 29 10:54:40.771: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.165683ms)
Dec 29 10:54:40.774: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.98726ms)
Dec 29 10:54:40.777: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.977224ms)
Dec 29 10:54:40.782: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.470938ms)
Dec 29 10:54:40.785: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.407695ms)
Dec 29 10:54:40.789: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.243293ms)
Dec 29 10:54:40.792: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.137777ms)
Dec 29 10:54:40.796: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.463278ms)
Dec 29 10:54:40.929: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 132.831396ms)
Dec 29 10:54:40.937: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.880786ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:54:40.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kfsgj" for this suite.
Dec 29 10:54:46.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:54:47.047: INFO: namespace: e2e-tests-proxy-kfsgj, resource: bindings, ignored listing per whitelist
Dec 29 10:54:47.115: INFO: namespace e2e-tests-proxy-kfsgj deletion completed in 6.172577722s

• [SLOW TEST:6.886 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:54:47.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-mmvl
STEP: Creating a pod to test atomic-volume-subpath
Dec 29 10:54:47.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mmvl" in namespace "e2e-tests-subpath-xm72v" to be "success or failure"
Dec 29 10:54:47.345: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 19.783483ms
Dec 29 10:54:49.661: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335683679s
Dec 29 10:54:51.679: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353719944s
Dec 29 10:54:54.320: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.994539639s
Dec 29 10:54:56.336: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.010282949s
Dec 29 10:54:58.349: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.023942543s
Dec 29 10:55:00.368: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.042583134s
Dec 29 10:55:02.378: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.053018239s
Dec 29 10:55:04.398: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Pending", Reason="", readiness=false. Elapsed: 17.073120635s
Dec 29 10:55:06.416: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 19.090819325s
Dec 29 10:55:08.447: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 21.121298985s
Dec 29 10:55:10.470: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 23.145105271s
Dec 29 10:55:12.505: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 25.179938783s
Dec 29 10:55:14.532: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 27.206606007s
Dec 29 10:55:16.563: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 29.238067209s
Dec 29 10:55:18.603: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 31.277295731s
Dec 29 10:55:20.618: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 33.293019334s
Dec 29 10:55:22.652: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Running", Reason="", readiness=false. Elapsed: 35.32668685s
Dec 29 10:55:24.679: INFO: Pod "pod-subpath-test-downwardapi-mmvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.354164841s
STEP: Saw pod success
Dec 29 10:55:24.680: INFO: Pod "pod-subpath-test-downwardapi-mmvl" satisfied condition "success or failure"
Dec 29 10:55:24.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-mmvl container test-container-subpath-downwardapi-mmvl: 
STEP: delete the pod
Dec 29 10:55:24.761: INFO: Waiting for pod pod-subpath-test-downwardapi-mmvl to disappear
Dec 29 10:55:24.798: INFO: Pod pod-subpath-test-downwardapi-mmvl no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mmvl
Dec 29 10:55:24.798: INFO: Deleting pod "pod-subpath-test-downwardapi-mmvl" in namespace "e2e-tests-subpath-xm72v"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:55:24.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-xm72v" for this suite.
Dec 29 10:55:30.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:55:31.020: INFO: namespace: e2e-tests-subpath-xm72v, resource: bindings, ignored listing per whitelist
Dec 29 10:55:31.049: INFO: namespace e2e-tests-subpath-xm72v deletion completed in 6.233560723s

• [SLOW TEST:43.934 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:55:31.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 29 10:55:41.891: INFO: Successfully updated pod "pod-update-bb49ce06-2a29-11ea-9252-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 29 10:55:41.913: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:55:41.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rbht9" for this suite.
Dec 29 10:56:05.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:56:06.059: INFO: namespace: e2e-tests-pods-rbht9, resource: bindings, ignored listing per whitelist
Dec 29 10:56:06.155: INFO: namespace e2e-tests-pods-rbht9 deletion completed in 24.236690124s

• [SLOW TEST:35.106 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:56:06.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 29 10:56:06.485: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:56:28.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cwhp6" for this suite.
Dec 29 10:56:53.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:56:53.128: INFO: namespace: e2e-tests-init-container-cwhp6, resource: bindings, ignored listing per whitelist
Dec 29 10:56:53.135: INFO: namespace e2e-tests-init-container-cwhp6 deletion completed in 24.151430771s

• [SLOW TEST:46.979 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:56:53.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ec31636c-2a29-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 10:56:53.335: INFO: Waiting up to 5m0s for pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-2g6t2" to be "success or failure"
Dec 29 10:56:53.405: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.628998ms
Dec 29 10:56:55.430: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095629528s
Dec 29 10:56:57.446: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110760806s
Dec 29 10:56:59.614: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278741924s
Dec 29 10:57:01.633: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.298135603s
Dec 29 10:57:03.644: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.308924015s
STEP: Saw pod success
Dec 29 10:57:03.644: INFO: Pod "pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 10:57:03.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 10:57:04.673: INFO: Waiting for pod pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005 to disappear
Dec 29 10:57:04.843: INFO: Pod pod-secrets-ec31d69d-2a29-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:57:04.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2g6t2" for this suite.
Dec 29 10:57:10.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:57:11.116: INFO: namespace: e2e-tests-secrets-2g6t2, resource: bindings, ignored listing per whitelist
Dec 29 10:57:11.207: INFO: namespace e2e-tests-secrets-2g6t2 deletion completed in 6.338191605s

• [SLOW TEST:18.072 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:57:11.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 29 10:57:11.434: INFO: namespace e2e-tests-kubectl-86ltw
Dec 29 10:57:11.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86ltw'
Dec 29 10:57:11.708: INFO: stderr: ""
Dec 29 10:57:11.708: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 29 10:57:12.733: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:12.733: INFO: Found 0 / 1
Dec 29 10:57:14.104: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:14.104: INFO: Found 0 / 1
Dec 29 10:57:14.721: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:14.722: INFO: Found 0 / 1
Dec 29 10:57:15.749: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:15.749: INFO: Found 0 / 1
Dec 29 10:57:17.535: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:17.536: INFO: Found 0 / 1
Dec 29 10:57:17.733: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:17.733: INFO: Found 0 / 1
Dec 29 10:57:18.723: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:18.723: INFO: Found 0 / 1
Dec 29 10:57:20.070: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:20.071: INFO: Found 0 / 1
Dec 29 10:57:20.725: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:20.726: INFO: Found 1 / 1
Dec 29 10:57:20.726: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 29 10:57:20.731: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 10:57:20.731: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 29 10:57:20.731: INFO: wait on redis-master startup in e2e-tests-kubectl-86ltw 
Dec 29 10:57:20.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mzwcx redis-master --namespace=e2e-tests-kubectl-86ltw'
Dec 29 10:57:20.945: INFO: stderr: ""
Dec 29 10:57:20.946: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Dec 10:57:19.084 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Dec 10:57:19.084 # Server started, Redis version 3.2.12\n1:M 29 Dec 10:57:19.084 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Dec 10:57:19.084 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 29 10:57:20.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-86ltw'
Dec 29 10:57:21.109: INFO: stderr: ""
Dec 29 10:57:21.109: INFO: stdout: "service/rm2 exposed\n"
Dec 29 10:57:21.125: INFO: Service rm2 in namespace e2e-tests-kubectl-86ltw found.
STEP: exposing service
Dec 29 10:57:23.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-86ltw'
Dec 29 10:57:23.361: INFO: stderr: ""
Dec 29 10:57:23.361: INFO: stdout: "service/rm3 exposed\n"
Dec 29 10:57:23.462: INFO: Service rm3 in namespace e2e-tests-kubectl-86ltw found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:57:25.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-86ltw" for this suite.
Dec 29 10:57:41.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:57:41.591: INFO: namespace: e2e-tests-kubectl-86ltw, resource: bindings, ignored listing per whitelist
Dec 29 10:57:41.693: INFO: namespace e2e-tests-kubectl-86ltw deletion completed in 16.176737755s

• [SLOW TEST:30.485 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:57:41.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 29 10:57:41.951: INFO: Waiting up to 5m0s for pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005" in namespace "e2e-tests-var-expansion-g5xc2" to be "success or failure"
Dec 29 10:57:41.961: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.795227ms
Dec 29 10:57:43.973: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022099536s
Dec 29 10:57:45.985: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033962025s
Dec 29 10:57:48.064: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112657929s
Dec 29 10:57:50.519: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.567310445s
Dec 29 10:57:52.598: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.646497952s
STEP: Saw pod success
Dec 29 10:57:52.598: INFO: Pod "var-expansion-092168d9-2a2a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 10:57:52.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-092168d9-2a2a-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 10:57:52.844: INFO: Waiting for pod var-expansion-092168d9-2a2a-11ea-9252-0242ac110005 to disappear
Dec 29 10:57:52.966: INFO: Pod var-expansion-092168d9-2a2a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:57:52.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-g5xc2" for this suite.
Dec 29 10:57:59.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:57:59.194: INFO: namespace: e2e-tests-var-expansion-g5xc2, resource: bindings, ignored listing per whitelist
Dec 29 10:57:59.245: INFO: namespace e2e-tests-var-expansion-g5xc2 deletion completed in 6.270844918s

• [SLOW TEST:17.553 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
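The Variable Expansion test above verifies that `$(VAR)` references in a container's command are substituted from the pod's environment before the command runs. A minimal sketch of such a pod, with an assumed pod name and message value, could be:

```yaml
# Hypothetical pod illustrating $(VAR) substitution in a container command.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from the environment"   # assumed value
    # $(MESSAGE) is expanded by the kubelet, not by the shell
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
```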
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:57:59.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 29 10:58:10.207: INFO: Successfully updated pod "pod-update-activedeadlineseconds-13a9372b-2a2a-11ea-9252-0242ac110005"
Dec 29 10:58:10.207: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-13a9372b-2a2a-11ea-9252-0242ac110005" in namespace "e2e-tests-pods-m94tf" to be "terminated due to deadline exceeded"
Dec 29 10:58:10.259: INFO: Pod "pod-update-activedeadlineseconds-13a9372b-2a2a-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 51.7115ms
Dec 29 10:58:12.346: INFO: Pod "pod-update-activedeadlineseconds-13a9372b-2a2a-11ea-9252-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.13917044s
Dec 29 10:58:12.346: INFO: Pod "pod-update-activedeadlineseconds-13a9372b-2a2a-11ea-9252-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 10:58:12.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-m94tf" for this suite.
Dec 29 10:58:18.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 10:58:18.596: INFO: namespace: e2e-tests-pods-m94tf, resource: bindings, ignored listing per whitelist
Dec 29 10:58:18.715: INFO: namespace e2e-tests-pods-m94tf deletion completed in 6.358785935s

• [SLOW TEST:19.469 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
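The Pods test above updates `activeDeadlineSeconds` on a running pod and waits for the kubelet to fail it with reason DeadlineExceeded, as seen in the Phase="Failed" line. A sketch of such a pod, with assumed names and values, might be:

```yaml
# Hypothetical pod whose deadline can later be lowered with a patch, e.g.:
#   kubectl patch pod pod-update-deadline-demo \
#     -p '{"spec":{"activeDeadlineSeconds":5}}'
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-deadline-demo   # assumed name
spec:
  activeDeadlineSeconds: 600   # once exceeded, the pod fails with DeadlineExceeded
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
```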
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 10:58:18.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 29 11:01:21.720: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:21.843: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:23.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:23.862: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:25.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:25.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:27.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:27.873: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:29.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:29.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:31.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:31.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:33.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:33.881: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:35.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:35.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:37.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:37.865: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:39.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:39.882: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:41.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:41.883: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:43.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:43.864: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:45.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:45.867: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:47.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:47.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:49.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:49.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:51.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:52.216: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:53.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:53.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:55.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:55.916: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:57.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:57.870: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:01:59.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:01:59.865: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:01.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:01.873: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:03.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:03.885: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:05.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:05.870: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:07.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:07.868: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:09.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:09.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:11.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:11.873: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:13.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:13.874: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:15.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:15.869: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:17.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:17.891: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:19.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:19.912: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:21.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:21.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:23.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:23.866: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:25.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:25.868: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:27.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:27.872: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:29.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:29.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:31.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:31.865: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:33.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:33.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:35.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:35.865: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:37.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:37.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:39.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:39.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:41.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:41.872: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:43.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:43.869: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:45.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:45.868: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:47.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:47.872: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:49.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:49.874: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:51.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:51.861: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:53.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:53.922: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:55.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:55.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:57.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:57.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:02:59.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:02:59.872: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:03:01.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:03:01.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:03:03.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:03:03.870: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 29 11:03:05.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 29 11:03:05.867: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:03:05.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-669q9" for this suite.
Dec 29 11:03:29.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:03:30.062: INFO: namespace: e2e-tests-container-lifecycle-hook-669q9, resource: bindings, ignored listing per whitelist
Dec 29 11:03:30.145: INFO: namespace e2e-tests-container-lifecycle-hook-669q9 deletion completed in 24.268467303s

• [SLOW TEST:311.429 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
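The lifecycle-hook test above creates a pod with a postStart exec hook and checks that the hook ran before deleting the pod (the long poll loop is the test waiting for deletion to complete). A minimal sketch of a pod with such a hook, with the container name and hook command assumed, could be:

```yaml
# Hypothetical pod with a postStart exec lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hook-container   # assumed container name
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container immediately after it is created;
          # the container is not considered Running until the hook completes
          command: ["sh", "-c", "echo poststart"]   # assumed hook command
```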
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:03:30.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 11:03:30.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-st85r" to be "success or failure"
Dec 29 11:03:30.401: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.11539ms
Dec 29 11:03:32.608: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235429794s
Dec 29 11:03:34.650: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277333261s
Dec 29 11:03:37.004: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632235359s
Dec 29 11:03:39.020: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.647949952s
Dec 29 11:03:41.033: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.66071586s
STEP: Saw pod success
Dec 29 11:03:41.033: INFO: Pod "downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:03:41.038: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 11:03:41.142: INFO: Waiting for pod downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005 to disappear
Dec 29 11:03:41.153: INFO: Pod downwardapi-volume-d8db878b-2a2a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:03:41.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-st85r" for this suite.
Dec 29 11:03:47.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:03:47.253: INFO: namespace: e2e-tests-projected-st85r, resource: bindings, ignored listing per whitelist
Dec 29 11:03:47.519: INFO: namespace e2e-tests-projected-st85r deletion completed in 6.357976368s

• [SLOW TEST:17.375 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
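The Projected downwardAPI test above mounts the container's own memory limit as a file via a projected volume, then reads it back from the container logs. A sketch of that setup, assuming the pod name, image, and limit value, might be:

```yaml
# Hypothetical pod exposing its memory limit via a projected downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"   # assumed limit; this value appears in the mounted file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```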
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:03:47.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e34faf92-2a2a-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 11:03:47.931: INFO: Waiting up to 5m0s for pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-kwb29" to be "success or failure"
Dec 29 11:03:47.960: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.925797ms
Dec 29 11:03:49.973: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042256213s
Dec 29 11:03:51.986: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055545099s
Dec 29 11:03:54.320: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389044022s
Dec 29 11:03:56.336: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405030155s
Dec 29 11:03:58.354: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423638697s
Dec 29 11:04:00.885: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.954703309s
STEP: Saw pod success
Dec 29 11:04:00.886: INFO: Pod "pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:04:00.900: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 11:04:01.513: INFO: Waiting for pod pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005 to disappear
Dec 29 11:04:01.562: INFO: Pod pod-secrets-e3538476-2a2a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:04:01.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kwb29" for this suite.
Dec 29 11:04:07.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:04:07.999: INFO: namespace: e2e-tests-secrets-kwb29, resource: bindings, ignored listing per whitelist
Dec 29 11:04:08.014: INFO: namespace e2e-tests-secrets-kwb29 deletion completed in 6.443335064s

• [SLOW TEST:20.494 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
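The Secrets volume test above mounts a secret as a non-root user with a `defaultMode` on the files and an `fsGroup` on the pod. A sketch of the shape being exercised, with assumed names, UID/GID, and mode, could be:

```yaml
# Hypothetical pod consuming a secret volume as non-root with defaultMode and fsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # assumed name
spec:
  securityContext:
    runAsUser: 1000   # assumed non-root UID
    fsGroup: 1001     # assumed GID applied to the mounted volume
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # assumed secret name
      defaultMode: 0400         # files readable only by the owner
```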
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:04:08.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-7xfcs
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-7xfcs
STEP: Deleting pre-stop pod
Dec 29 11:04:33.597: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:04:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-7xfcs" for this suite.
Dec 29 11:05:13.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:05:13.927: INFO: namespace: e2e-tests-prestop-7xfcs, resource: bindings, ignored listing per whitelist
Dec 29 11:05:14.124: INFO: namespace e2e-tests-prestop-7xfcs deletion completed in 40.442192244s

• [SLOW TEST:66.110 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
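The PreStop test above runs a server pod that records contacts, then deletes a tester pod whose preStop hook reports in before termination (the `"prestop": 1` entry in the Saw: output). A sketch of a pod with such a hook, with the hook command and server endpoint assumed, might be:

```yaml
# Hypothetical tester pod with a preStop exec lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: tester   # the e2e test names its pods "server" and "tester"
spec:
  containers:
  - name: tester
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs before the container receives SIGTERM; here it notifies the
          # server pod, which tallies the contact (endpoint URL is assumed)
          command: ["sh", "-c", "wget -qO- http://server:8080/prestop || true"]
```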
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:05:14.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-16dcb1ef-2a2b-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 11:05:14.444: INFO: Waiting up to 5m0s for pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-llfrw" to be "success or failure"
Dec 29 11:05:14.460: INFO: Pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.485588ms
Dec 29 11:05:16.479: INFO: Pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034964283s
Dec 29 11:05:18.500: INFO: Pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055617786s
Dec 29 11:05:20.836: INFO: Pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392016197s
Dec 29 11:05:22.912: INFO: Pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467524493s
Dec 29 11:05:24.936: INFO: Pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005": Phase="Failed", Reason="", readiness=false. Elapsed: 10.491266088s
Dec 29 11:05:25.007: INFO: Output of node "hunter-server-hu5at5svl7ps" pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005" container "secret-env-test": failed to open log file "/var/log/pods/16e71877-2a2b-11ea-a994-fa163e34d433/secret-env-test/0.log": open /var/log/pods/16e71877-2a2b-11ea-a994-fa163e34d433/secret-env-test/0.log: no such file or directory
STEP: delete the pod
Dec 29 11:05:25.060: INFO: Waiting for pod pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005 to disappear
Dec 29 11:05:25.076: INFO: Pod pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005 no longer exists
Dec 29 11:05:25.076: INFO: Unexpected error occurred: expected pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005" success: pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [secret-env-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [secret-env-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.96.1.240 PodIP: StartTime:2019-12-29 11:05:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:secret-env-test State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:cannot join network of a non running container: 22787b65977f95c9d360544075f8866601a2a68fdda82b942974eb8982621816,StartedAt:2019-12-29 11:05:18 +0000 UTC,FinishedAt:2019-12-29 11:05:18 +0000 UTC,ContainerID:docker://2010c92b3c3435ca2c259f56a6610871259d256b8f046cbb178df4fc300e1079,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 ContainerID:docker://2010c92b3c3435ca2c259f56a6610871259d256b8f046cbb178df4fc300e1079}] QOSClass:BestEffort}
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-secrets-llfrw".
STEP: Found 4 events.
Dec 29 11:05:25.142: INFO: At 2019-12-29 11:05:14 +0000 UTC - event for pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005: {default-scheduler } Scheduled: Successfully assigned e2e-tests-secrets-llfrw/pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005 to hunter-server-hu5at5svl7ps
Dec 29 11:05:25.142: INFO: At 2019-12-29 11:05:18 +0000 UTC - event for pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Dec 29 11:05:25.142: INFO: At 2019-12-29 11:05:21 +0000 UTC - event for pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005: {kubelet hunter-server-hu5at5svl7ps} Created: Created container
Dec 29 11:05:25.142: INFO: At 2019-12-29 11:05:22 +0000 UTC - event for pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005: {kubelet hunter-server-hu5at5svl7ps} Failed: Error: failed to start container "secret-env-test": Error response from daemon: cannot join network of a non running container: 22787b65977f95c9d360544075f8866601a2a68fdda82b942974eb8982621816
Dec 29 11:05:25.174: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:05:25.174: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 18:02:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 18:02:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 18:15:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 18:15:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:58:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-27 10:58:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Dec 29 11:05:25.174: INFO: 
Dec 29 11:05:25.182: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Dec 29 11:05:25.186: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:16449302,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-29 11:05:22 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-29 11:05:22 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-29 11:05:22 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 
2019-12-29 11:05:22 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70 nginx:latest] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 
126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} 
{[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 29 11:05:25.187: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Dec 29 11:05:25.190: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps
Dec 29 11:05:25.203: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Dec 29 11:05:25.203: INFO: 	Container coredns ready: true, restart count 0
Dec 29 11:05:25.203: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Dec 29 11:05:25.203: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 29 11:05:25.203: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Dec 29 11:05:25.203: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Dec 29 11:05:25.203: INFO: 	Container weave ready: true, restart count 0
Dec 29 11:05:25.203: INFO: 	Container weave-npc ready: true, restart count 0
Dec 29 11:05:25.203: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Dec 29 11:05:25.203: INFO: 	Container coredns ready: true, restart count 0
Dec 29 11:05:25.203: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Dec 29 11:05:25.203: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Dec 29 11:05:25.203: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W1229 11:05:25.207672       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 11:05:25.247: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Dec 29 11:05:25.247: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:31.397282s}
Dec 29 11:05:25.247: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.023849s}
Dec 29 11:05:25.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-llfrw" for this suite.
Dec 29 11:05:31.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:05:31.458: INFO: namespace: e2e-tests-secrets-llfrw, resource: bindings, ignored listing per whitelist
Dec 29 11:05:31.524: INFO: namespace e2e-tests-secrets-llfrw deletion completed in 6.271435195s

• Failure [17.400 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc000a15310>: {
          s: "expected pod \"pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005\" success: pod \"pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [secret-env-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [secret-env-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.96.1.240 PodIP: StartTime:2019-12-29 11:05:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:secret-env-test State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:cannot join network of a non running container: 22787b65977f95c9d360544075f8866601a2a68fdda82b942974eb8982621816,StartedAt:2019-12-29 11:05:18 +0000 UTC,FinishedAt:2019-12-29 11:05:18 +0000 UTC,ContainerID:docker://2010c92b3c3435ca2c259f56a6610871259d256b8f046cbb178df4fc300e1079,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 ContainerID:docker://2010c92b3c3435ca2c259f56a6610871259d256b8f046cbb178df4fc300e1079}] QOSClass:BestEffort}",
      }
      expected pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005" success: pod "pod-secrets-16dfba3e-2a2b-11ea-9252-0242ac110005" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [secret-env-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [secret-env-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:05:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.96.1.240 PodIP: StartTime:2019-12-29 11:05:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:secret-env-test State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:cannot join network of a non running container: 22787b65977f95c9d360544075f8866601a2a68fdda82b942974eb8982621816,StartedAt:2019-12-29 11:05:18 +0000 UTC,FinishedAt:2019-12-29 11:05:18 +0000 UTC,ContainerID:docker://2010c92b3c3435ca2c259f56a6610871259d256b8f046cbb178df4fc300e1079,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 ContainerID:docker://2010c92b3c3435ca2c259f56a6610871259d256b8f046cbb178df4fc300e1079}] QOSClass:BestEffort}
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395
------------------------------
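For context on the failure above: the spec creates a Secret and a pod that consumes it through environment variables. The `ContainerCannotRun` / "cannot join network of a non running container" error usually means the pod's sandbox (pause) container died before the app container could start, i.e. a node or container-runtime flake rather than a problem with the test's manifest. A minimal sketch of the kind of objects this test creates (the names and the secret payload here are hypothetical, not the suite's actual spec):

```yaml
# Illustrative only: a pod consuming a Secret via env vars.
apiVersion: v1
kind: Secret
metadata:
  name: test-secret          # hypothetical name
type: Opaque
data:
  data-1: dmFsdWUtMQ==       # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29      # same image the failed pod used
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
```

Applying this and reading the pod's logs should show `SECRET_DATA=value-1` among the printed environment, which is essentially the success condition the e2e test asserts.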
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:05:31.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 29 11:05:31.727: INFO: Waiting up to 5m0s for pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005" in namespace "e2e-tests-containers-2gzb6" to be "success or failure"
Dec 29 11:05:31.744: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.152233ms
Dec 29 11:05:33.805: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077204531s
Dec 29 11:05:35.821: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093449123s
Dec 29 11:05:37.899: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1713379s
Dec 29 11:05:39.927: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199751425s
Dec 29 11:05:41.944: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.216160967s
STEP: Saw pod success
Dec 29 11:05:41.944: INFO: Pod "client-containers-21305237-2a2b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:05:41.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-21305237-2a2b-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:05:42.011: INFO: Waiting for pod client-containers-21305237-2a2b-11ea-9252-0242ac110005 to disappear
Dec 29 11:05:42.047: INFO: Pod client-containers-21305237-2a2b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:05:42.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-2gzb6" for this suite.
Dec 29 11:05:49.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:05:49.335: INFO: namespace: e2e-tests-containers-2gzb6, resource: bindings, ignored listing per whitelist
Dec 29 11:05:49.509: INFO: namespace e2e-tests-containers-2gzb6 deletion completed in 7.440121833s

• [SLOW TEST:17.984 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
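The spec above verifies that `args` in a pod spec overrides the image's default Docker `CMD` (whereas `command` would override the image's `ENTRYPOINT`). A hedged sketch of such a pod, using the entrypoint-tester image that appears in the node's image inventory (pod name and argument values are illustrative):

```yaml
# Illustrative only: overriding an image's default arguments (docker CMD).
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0
    args: ["override", "arguments"] # replaces CMD; ENTRYPOINT is kept
```

Because only `args` is set, the image's entrypoint still runs, but with the overridden argument list; the test reads the container's output to confirm the override took effect.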
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:05:49.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-59msm
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-59msm
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-59msm
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-59msm
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-59msm
Dec 29 11:06:03.945: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-59msm, name: ss-0, uid: 341495e2-2a2b-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 29 11:06:04.336: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-59msm, name: ss-0, uid: 341495e2-2a2b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 29 11:06:04.410: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-59msm, name: ss-0, uid: 341495e2-2a2b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 29 11:06:04.427: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-59msm
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-59msm
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-59msm and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 29 11:06:19.237: INFO: Deleting all statefulset in ns e2e-tests-statefulset-59msm
Dec 29 11:06:19.250: INFO: Scaling statefulset ss to 0
Dec 29 11:06:29.342: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:06:29.353: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:06:29.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-59msm" for this suite.
Dec 29 11:06:37.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:06:37.718: INFO: namespace: e2e-tests-statefulset-59msm, resource: bindings, ignored listing per whitelist
Dec 29 11:06:37.737: INFO: namespace e2e-tests-statefulset-59msm deletion completed in 8.31732398s

• [SLOW TEST:48.228 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
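The sequence above is the eviction/recreate check: a bare pod is pinned to a node claiming a `hostPort`, then a StatefulSet whose pod template claims the same `hostPort` on the same node is created, so `ss-0` goes `Failed` (as logged at 11:06:04); the StatefulSet controller must keep deleting and recreating `ss-0` until the conflicting pod is removed, after which `ss-0` runs. A rough sketch of the conflicting StatefulSet (the port number, labels, and node pinning are assumptions for illustration):

```yaml
# Illustrative only: a StatefulSet whose pod conflicts on a hostPort.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test          # headless service created earlier in the spec
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: hunter-server-hu5at5svl7ps  # pinned to the chosen node
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017    # hypothetical port; same one the bare pod holds
```

Once the bare pod holding the port is deleted, the next recreated `ss-0` can bind the `hostPort` and reach `Running`, which is what the spec waits for before tearing down.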
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:06:37.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 29 11:06:37.949: INFO: Waiting up to 5m0s for pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-sx5hh" to be "success or failure"
Dec 29 11:06:38.042: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.386352ms
Dec 29 11:06:40.052: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103054413s
Dec 29 11:06:42.143: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194474355s
Dec 29 11:06:44.232: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283275231s
Dec 29 11:06:46.772: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.822915377s
Dec 29 11:06:48.783: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.834413258s
STEP: Saw pod success
Dec 29 11:06:48.783: INFO: Pod "pod-48a8c8c6-2a2b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:06:48.787: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-48a8c8c6-2a2b-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:06:48.858: INFO: Waiting for pod pod-48a8c8c6-2a2b-11ea-9252-0242ac110005 to disappear
Dec 29 11:06:48.865: INFO: Pod pod-48a8c8c6-2a2b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:06:48.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sx5hh" for this suite.
Dec 29 11:06:55.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:06:55.265: INFO: namespace: e2e-tests-emptydir-sx5hh, resource: bindings, ignored listing per whitelist
Dec 29 11:06:55.268: INFO: namespace e2e-tests-emptydir-sx5hh deletion completed in 6.396141821s

• [SLOW TEST:17.530 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
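The `(non-root,0644,default)` variant writes a file with mode 0644 into an `emptyDir` on the default (node-disk) medium as a non-root user, then verifies the resulting mode and ownership from the container's output. A simplified analogue using busybox in place of the suite's mounttest image (the UID and paths are hypothetical):

```yaml
# Illustrative only: non-root write of a 0644 file into a default-medium emptyDir.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example  # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # non-root, matching the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # "default" medium = backed by node disk, not tmpfs
```

The pod's log line from `ls -l` is what a check like this inspects; specifying `medium: Memory` instead would exercise the tmpfs variants of the same test family.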
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:06:55.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1229 11:07:37.365503       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 11:07:37.365: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:07:37.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-r62ps" for this suite.
Dec 29 11:07:48.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:07:48.617: INFO: namespace: e2e-tests-gc-r62ps, resource: bindings, ignored listing per whitelist
Dec 29 11:07:48.639: INFO: namespace e2e-tests-gc-r62ps deletion completed in 11.264874956s

• [SLOW TEST:53.370 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
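The orphaning behavior exercised in the test above hinges on the `propagationPolicy` field of the delete request: with `Orphan`, the garbage collector strips the owner references from the dependents (the rc's pods) instead of cascading the delete. A minimal sketch of the request body, assuming the real `DeleteOptions` field names (the `build_delete_options` helper itself is illustrative, not the test's code):

```python
# Sketch: the DeleteOptions body an orphaning delete sends to the API.
# "Orphan", "Background", "Foreground" are the real propagationPolicy
# constants; this helper is hypothetical.

def build_delete_options(propagation_policy="Orphan"):
    """Return a DeleteOptions body for a DELETE on a replicationcontroller."""
    allowed = {"Orphan", "Background", "Foreground"}
    if propagation_policy not in allowed:
        raise ValueError(f"unknown propagationPolicy: {propagation_policy}")
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        # "Orphan" tells the garbage collector to remove ownerReferences
        # from dependents instead of deleting them.
        "propagationPolicy": propagation_policy,
    }

opts = build_delete_options()
```

This is why the test then waits 30 seconds and checks that the pods are still present.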
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:07:48.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 11:07:49.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ntg78'
Dec 29 11:07:52.381: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 29 11:07:52.381: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 29 11:07:54.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-ntg78'
Dec 29 11:07:56.139: INFO: stderr: ""
Dec 29 11:07:56.139: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:07:56.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ntg78" for this suite.
Dec 29 11:08:06.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:08:06.801: INFO: namespace: e2e-tests-kubectl-ntg78, resource: bindings, ignored listing per whitelist
Dec 29 11:08:06.850: INFO: namespace e2e-tests-kubectl-ntg78 deletion completed in 10.672908546s

• [SLOW TEST:18.210 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
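The deprecation warning in the log above comes from `kubectl run`'s old generator machinery: the `--restart` flag selected which resource kind kubectl generated. A rough sketch of that mapping, assuming the documented behavior of this kubectl era (the dictionary and helper are illustrative; the real logic lived in kubectl's Go code):

```python
# Rough sketch of how old kubectl chose a generated resource from
# --restart (illustrative; not kubectl's actual implementation).

RESTART_TO_RESOURCE = {
    "Always": "deployment",   # long-running workload
    "OnFailure": "job",       # run-to-completion, retried on failure
    "Never": "pod",           # one-shot pod
}

def resource_for_restart(restart_policy):
    try:
        return RESTART_TO_RESOURCE[restart_policy]
    except KeyError:
        raise ValueError(f"unsupported --restart value: {restart_policy}")
```

Which is consistent with the log showing `job.batch/e2e-test-nginx-job created` for `--restart=OnFailure`.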
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:08:06.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 29 11:08:07.208: INFO: Waiting up to 5m0s for pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-vbh8h" to be "success or failure"
Dec 29 11:08:07.221: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.475167ms
Dec 29 11:08:09.249: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040819369s
Dec 29 11:08:11.491: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282329922s
Dec 29 11:08:13.544: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335816634s
Dec 29 11:08:15.634: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42517843s
Dec 29 11:08:17.661: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.452134699s
Dec 29 11:08:19.678: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.469719018s
Dec 29 11:08:21.696: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.48761272s
STEP: Saw pod success
Dec 29 11:08:21.696: INFO: Pod "downward-api-7dddb919-2a2b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:08:21.703: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7dddb919-2a2b-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 11:08:21.881: INFO: Waiting for pod downward-api-7dddb919-2a2b-11ea-9252-0242ac110005 to disappear
Dec 29 11:08:21.931: INFO: Pod downward-api-7dddb919-2a2b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:08:21.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vbh8h" for this suite.
Dec 29 11:08:30.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:08:30.383: INFO: namespace: e2e-tests-downward-api-vbh8h, resource: bindings, ignored listing per whitelist
Dec 29 11:08:30.384: INFO: namespace e2e-tests-downward-api-vbh8h deletion completed in 8.427859447s

• [SLOW TEST:23.533 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
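The pod in the test above receives the node's IP through the downward API's `fieldRef` mechanism. A minimal sketch of the relevant container env entry (`status.hostIP` is the real field path; the env var name here is a placeholder):

```python
# Sketch of a downward-API env entry exposing the node IP to a container.
# "HOST_IP" is a placeholder name; status.hostIP is the real fieldPath.

def host_ip_env_var(name="HOST_IP"):
    return {
        "name": name,
        "valueFrom": {
            "fieldRef": {"fieldPath": "status.hostIP"},
        },
    }

env = host_ip_env_var()
```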
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:08:30.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 11:08:30.857: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:08:31.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-5j4kw" for this suite.
Dec 29 11:08:38.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:08:38.192: INFO: namespace: e2e-tests-custom-resource-definition-5j4kw, resource: bindings, ignored listing per whitelist
Dec 29 11:08:38.279: INFO: namespace e2e-tests-custom-resource-definition-5j4kw deletion completed in 6.259588206s

• [SLOW TEST:7.896 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
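Creating a CustomResourceDefinition, as this test does, only succeeds if its metadata name matches `<plural>.<group>`. A sketch of that naming rule, using the `apiextensions.k8s.io/v1beta1` fields of this cluster's era (the group and plural below are hypothetical examples):

```python
# Sketch: the naming rule a CustomResourceDefinition must satisfy.
# metadata.name must equal "<spec.names.plural>.<spec.group>".

def crd_name_is_valid(crd):
    spec = crd["spec"]
    expected = f'{spec["names"]["plural"]}.{spec["group"]}'
    return crd["metadata"]["name"] == expected

crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",  # API version of this era
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "foos.example.com"},      # hypothetical group/plural
    "spec": {
        "group": "example.com",
        "version": "v1",
        "scope": "Namespaced",
        "names": {"plural": "foos", "singular": "foo", "kind": "Foo"},
    },
}
```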
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:08:38.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 11:08:38.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:08:49.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6fd2q" for this suite.
Dec 29 11:09:31.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:09:31.570: INFO: namespace: e2e-tests-pods-6fd2q, resource: bindings, ignored listing per whitelist
Dec 29 11:09:31.657: INFO: namespace e2e-tests-pods-6fd2q deletion completed in 42.23637879s

• [SLOW TEST:53.377 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
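The websocket exec test above connects to the pod's `exec` subresource. A sketch of the URL such a client builds, assuming the real query parameter names (`command`, `container`, `stdout`, `stderr`); the namespace and pod names are placeholders:

```python
from urllib.parse import urlencode

# Sketch: the exec subresource URL a websocket client upgrades against.
# Each command argument is passed as a repeated "command" parameter.

def exec_url(namespace, pod, command, container=None):
    params = [("command", arg) for arg in command]
    if container:
        params.append(("container", container))
    params += [("stdout", "1"), ("stderr", "1")]
    return (f"/api/v1/namespaces/{namespace}/pods/{pod}/exec?"
            + urlencode(params))
```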
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:09:31.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 29 11:09:31.946: INFO: Waiting up to 5m0s for pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-qkjqd" to be "success or failure"
Dec 29 11:09:31.977: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.115243ms
Dec 29 11:09:33.988: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041471322s
Dec 29 11:09:35.999: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052317611s
Dec 29 11:09:38.013: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066769738s
Dec 29 11:09:40.040: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093206009s
Dec 29 11:09:42.051: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104917475s
STEP: Saw pod success
Dec 29 11:09:42.051: INFO: Pod "pod-b05ff601-2a2b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:09:42.057: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b05ff601-2a2b-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:09:42.397: INFO: Waiting for pod pod-b05ff601-2a2b-11ea-9252-0242ac110005 to disappear
Dec 29 11:09:42.684: INFO: Pod pod-b05ff601-2a2b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:09:42.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qkjqd" for this suite.
Dec 29 11:09:48.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:09:48.868: INFO: namespace: e2e-tests-emptydir-qkjqd, resource: bindings, ignored listing per whitelist
Dec 29 11:09:49.011: INFO: namespace e2e-tests-emptydir-qkjqd deletion completed in 6.30467169s

• [SLOW TEST:17.353 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
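The `(root,0777,tmpfs)` case above exercises an emptyDir volume backed by tmpfs, which is selected by setting the volume's medium to `Memory`. A minimal sketch of that volume and its mount (names and paths are placeholders):

```python
# Sketch of the emptyDir volume the (root,0777,tmpfs) case uses:
# medium "Memory" backs the volume with tmpfs rather than node disk.

def tmpfs_emptydir_volume(name="test-volume"):
    return {"name": name, "emptyDir": {"medium": "Memory"}}

def volume_mount(name="test-volume", path="/test-volume"):
    return {"name": name, "mountPath": path}
```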
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:09:49.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-baac76ed-2a2b-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:09:49.216: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-lws2d" to be "success or failure"
Dec 29 11:09:49.235: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.852311ms
Dec 29 11:09:51.250: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033179333s
Dec 29 11:09:53.265: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048514568s
Dec 29 11:09:55.449: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232046665s
Dec 29 11:09:57.455: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238563435s
Dec 29 11:09:59.484: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.267355593s
STEP: Saw pod success
Dec 29 11:09:59.484: INFO: Pod "pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:09:59.491: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 11:10:00.505: INFO: Waiting for pod pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005 to disappear
Dec 29 11:10:00.699: INFO: Pod pod-projected-configmaps-baad108a-2a2b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:10:00.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lws2d" for this suite.
Dec 29 11:10:06.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:10:06.943: INFO: namespace: e2e-tests-projected-lws2d, resource: bindings, ignored listing per whitelist
Dec 29 11:10:06.977: INFO: namespace e2e-tests-projected-lws2d deletion completed in 6.257312343s

• [SLOW TEST:17.965 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
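The projected configMap test above mounts the same ConfigMap through two separate projected volumes in one pod. A sketch of those volume entries (volume and ConfigMap names are placeholders):

```python
# Sketch: one ConfigMap consumed via two projected volumes in a pod.
# A projected volume lists its inputs under "sources".

def projected_configmap_volume(vol_name, configmap_name):
    return {
        "name": vol_name,
        "projected": {
            "sources": [{"configMap": {"name": configmap_name}}],
        },
    }

volumes = [
    projected_configmap_volume("projected-configmap-volume", "cm"),
    projected_configmap_volume("projected-configmap-volume-2", "cm"),
]
```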
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:10:06.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 11:10:07.164: INFO: Creating deployment "test-recreate-deployment"
Dec 29 11:10:07.181: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 29 11:10:07.269: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 29 11:10:09.457: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 29 11:10:09.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 11:10:11.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 11:10:13.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 11:10:15.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713214607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 11:10:17.513: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 29 11:10:17.536: INFO: Updating deployment test-recreate-deployment
Dec 29 11:10:17.536: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 29 11:10:18.476: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-ccnzb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ccnzb/deployments/test-recreate-deployment,UID:c5621b8e-2a2b-11ea-a994-fa163e34d433,ResourceVersion:16450235,Generation:2,CreationTimestamp:2019-12-29 11:10:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-29 11:10:18 +0000 UTC 2019-12-29 11:10:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-29 11:10:18 +0000 UTC 2019-12-29 11:10:07 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 29 11:10:18.491: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-ccnzb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ccnzb/replicasets/test-recreate-deployment-589c4bfd,UID:cbd07cc4-2a2b-11ea-a994-fa163e34d433,ResourceVersion:16450232,Generation:1,CreationTimestamp:2019-12-29 11:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c5621b8e-2a2b-11ea-a994-fa163e34d433 0xc001c1fc8f 0xc001c1fca0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 11:10:18.491: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 29 11:10:18.491: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-ccnzb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ccnzb/replicasets/test-recreate-deployment-5bf7f65dc,UID:c5715641-2a2b-11ea-a994-fa163e34d433,ResourceVersion:16450223,Generation:2,CreationTimestamp:2019-12-29 11:10:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c5621b8e-2a2b-11ea-a994-fa163e34d433 0xc001c1fd60 0xc001c1fd61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 11:10:18.500: INFO: Pod "test-recreate-deployment-589c4bfd-kcwxl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-kcwxl,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-ccnzb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccnzb/pods/test-recreate-deployment-589c4bfd-kcwxl,UID:cbd3368c-2a2b-11ea-a994-fa163e34d433,ResourceVersion:16450229,Generation:0,CreationTimestamp:2019-12-29 11:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd cbd07cc4-2a2b-11ea-a994-fa163e34d433 0xc001c42abf 0xc001c42ad0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8vhfg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8vhfg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8vhfg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c42b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c42b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:10:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:10:18.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ccnzb" for this suite.
Dec 29 11:10:32.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:10:32.737: INFO: namespace: e2e-tests-deployment-ccnzb, resource: bindings, ignored listing per whitelist
Dec 29 11:10:32.768: INFO: namespace e2e-tests-deployment-ccnzb deletion completed in 14.255043214s

• [SLOW TEST:25.791 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:10:32.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 29 11:10:43.539: INFO: Successfully updated pod "labelsupdated4b7d43e-2a2b-11ea-9252-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:10:45.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8t6lc" for this suite.
Dec 29 11:11:09.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:11:09.969: INFO: namespace: e2e-tests-downward-api-8t6lc, resource: bindings, ignored listing per whitelist
Dec 29 11:11:09.972: INFO: namespace e2e-tests-downward-api-8t6lc deletion completed in 24.193633373s

• [SLOW TEST:37.203 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:11:09.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:11:20.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-l6qjm" for this suite.
Dec 29 11:12:04.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:12:04.598: INFO: namespace: e2e-tests-kubelet-test-l6qjm, resource: bindings, ignored listing per whitelist
Dec 29 11:12:04.661: INFO: namespace e2e-tests-kubelet-test-l6qjm deletion completed in 44.316183592s

• [SLOW TEST:54.688 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:12:04.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pqmt8
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-pqmt8
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pqmt8
Dec 29 11:12:04.883: INFO: Found 0 stateful pods, waiting for 1
Dec 29 11:12:14.896: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 29 11:12:14.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:12:15.736: INFO: stderr: ""
Dec 29 11:12:15.736: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:12:15.736: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
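The `mv` command above is how the test flips a pod's readiness without restarting it: nginx's HTTP readiness probe fails once `index.html` leaves the web root, and recovers once the file is moved back. A minimal local simulation of that toggle, using temp directories to stand in for the container filesystem (all names here are illustrative, not the e2e framework's):

```python
import shutil
import tempfile
from pathlib import Path

# Simulate the readiness toggle: the "probe" succeeds iff index.html
# is present in the web root, so moving the file out flips the pod to
# Ready=false and moving it back flips it to Ready=true again.
webroot = Path(tempfile.mkdtemp())
stash = Path(tempfile.mkdtemp())
(webroot / "index.html").write_text("ok\n")

def ready() -> bool:
    # stand-in for nginx's HTTP readiness probe
    return (webroot / "index.html").exists()

r1 = ready()                                                  # Ready=true
shutil.move(str(webroot / "index.html"), str(stash / "index.html"))  # -> Ready=false
r2 = ready()
shutil.move(str(stash / "index.html"), str(webroot / "index.html"))  # -> Ready=true
r3 = ready()
print(r1, r2, r3)  # True False True
```

Because only a file is moved, the container never exits; the kubelet keeps it Running while flipping the Ready condition, which is exactly the state the test needs to exercise scaling behavior.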

Dec 29 11:12:15.756: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 29 11:12:25.770: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:12:25.770: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:12:25.829: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999455s
Dec 29 11:12:26.851: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976454141s
Dec 29 11:12:27.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.955022314s
Dec 29 11:12:28.894: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.925177565s
Dec 29 11:12:30.055: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.912037952s
Dec 29 11:12:31.065: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.750868788s
Dec 29 11:12:32.083: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.740986637s
Dec 29 11:12:33.094: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.722730571s
Dec 29 11:12:34.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.711979504s
Dec 29 11:12:35.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 679.673572ms
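The countdown lines above ("doesn't scale past 1 for another Ns") come from a loop that asserts a condition keeps holding for a fixed window while ss-0 is unready. A minimal sketch of that pattern, where `holds_for` is an illustrative name rather than the framework's API:

```python
import time

def holds_for(cond, duration, interval):
    """Poll cond() every `interval` seconds and return True only if it
    stays truthy for the whole `duration` window; return False the
    moment the condition is violated."""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        if not cond():
            return False  # condition broke before the window elapsed
        time.sleep(interval)
    return True

# usage: a replica count pinned at 1 never exceeds the ceiling of 1
replicas = 1
print(holds_for(lambda: replicas <= 1, duration=0.1, interval=0.02))  # True
```

Note the loop must run the full window before declaring success, which is why the log prints a shrinking remainder on each poll instead of passing immediately.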
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-pqmt8
Dec 29 11:12:36.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:12:36.991: INFO: stderr: ""
Dec 29 11:12:36.992: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 11:12:36.992: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 11:12:37.098: INFO: Found 2 stateful pods, waiting for 3
Dec 29 11:12:47.268: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:12:47.268: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:12:47.268: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 29 11:12:57.115: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:12:57.116: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:12:57.116: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 29 11:13:07.115: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:13:07.115: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:13:07.115: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
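The ordering check above relies on the StatefulSet contract: with the default OrderedReady pod management, ss-0, ss-1, ss-2 must be created strictly in ordinal order. A sketch of such a verification, assuming we have each pod's creation timestamp indexed by ordinal (the function name is hypothetical):

```python
def scaled_up_in_order(creation_times):
    """creation_times: list of timestamps indexed by pod ordinal
    (ss-0, ss-1, ...). True iff each pod was created no earlier
    than its predecessor."""
    return all(a <= b for a, b in zip(creation_times, creation_times[1:]))

print(scaled_up_in_order([100.0, 130.0, 150.0]))  # True
print(scaled_up_in_order([100.0, 150.0, 130.0]))  # False
```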
STEP: Scale down will halt with unhealthy stateful pod
Dec 29 11:13:07.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:13:07.735: INFO: stderr: ""
Dec 29 11:13:07.736: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:13:07.736: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:13:07.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:13:08.340: INFO: stderr: ""
Dec 29 11:13:08.340: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:13:08.340: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:13:08.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:13:08.955: INFO: stderr: ""
Dec 29 11:13:08.955: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:13:08.955: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:13:08.955: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:13:08.969: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 29 11:13:19.009: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:13:19.009: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:13:19.009: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:13:19.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999994421s
Dec 29 11:13:20.193: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980370502s
Dec 29 11:13:21.212: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.921800369s
Dec 29 11:13:22.235: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.902950405s
Dec 29 11:13:23.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.880399204s
Dec 29 11:13:24.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.858984655s
Dec 29 11:13:25.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.817678909s
Dec 29 11:13:26.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.667261784s
Dec 29 11:13:27.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.615662907s
Dec 29 11:13:28.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 586.158216ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-pqmt8
Dec 29 11:13:29.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:13:30.314: INFO: stderr: ""
Dec 29 11:13:30.315: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 11:13:30.315: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 11:13:30.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:13:30.968: INFO: stderr: ""
Dec 29 11:13:30.968: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 11:13:30.968: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 11:13:30.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:13:31.391: INFO: rc: 126
Dec 29 11:13:31.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc00216e8a0 exit status 126   true [0xc00132a2b8 0xc00132a2d0 0xc00132a2e8] [0xc00132a2b8 0xc00132a2d0 0xc00132a2e8] [0xc00132a2c8 0xc00132a2e0] [0x935700 0x935700] 0xc0017a1380 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126

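The "Waiting 10s to retry failed RunHostCmd" entries form a fixed-backoff retry loop: the exec against ss-2 first fails with rc 126 (the container is stopping, so exec is refused), then keeps failing with rc 1 (NotFound) once the pod is deleted by the scale-down. A Python sketch of that loop's shape; `run_with_retries` and the fake `cmd` are illustrative names, not the e2e framework's:

```python
import time

def run_with_retries(cmd, retries=20, backoff=10.0):
    """Re-run cmd() -> (rc, output) until rc == 0, sleeping `backoff`
    seconds between failed attempts; raise after `retries` tries.
    Loosely mirrors the retry loop driving the log lines above."""
    rc = None
    for _ in range(retries):
        rc, out = cmd()
        if rc == 0:
            return out
        time.sleep(backoff)
    raise RuntimeError(f"still failing after {retries} attempts (rc={rc})")

# usage: fail with rc 126, then rc 1, then succeed on the third try
attempts = iter([(126, ""), (1, ""), (0, "moved")])
print(run_with_retries(lambda: next(attempts), retries=5, backoff=0.0))  # moved
```

In the log, the restore command can never succeed (ss-2 has been deleted), so the loop simply burns through its retry budget, one attempt every 10 seconds.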
Dec 29 11:13:41.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:13:41.546: INFO: rc: 1
Dec 29 11:13:41.546: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001d7ff50 exit status 1   true [0xc001c822f8 0xc001c82310 0xc001c82328] [0xc001c822f8 0xc001c82310 0xc001c82328] [0xc001c82308 0xc001c82320] [0x935700 0x935700] 0xc00187fe60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:16:33.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:16:33.960: INFO: rc: 1
Dec 29 11:16:33.960: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001644d50 exit status 1   true [0xc00185a040 0xc00185a058 0xc00185a070] [0xc00185a040 0xc00185a058 0xc00185a070] [0xc00185a050 0xc00185a068] [0x935700 0x935700] 0xc001fce480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:16:43.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:16:44.051: INFO: rc: 1
Dec 29 11:16:44.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001644f00 exit status 1   true [0xc00185a078 0xc00185a090 0xc00185a0a8] [0xc00185a078 0xc00185a090 0xc00185a0a8] [0xc00185a088 0xc00185a0a0] [0x935700 0x935700] 0xc001fce720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:16:54.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:16:54.153: INFO: rc: 1
Dec 29 11:16:54.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000011530 exit status 1   true [0xc00132a000 0xc00132a018 0xc00132a038] [0xc00132a000 0xc00132a018 0xc00132a038] [0xc00132a010 0xc00132a030] [0x935700 0x935700] 0xc001e881e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:17:04.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:17:04.251: INFO: rc: 1
Dec 29 11:17:04.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001645050 exit status 1   true [0xc00185a0b0 0xc00185a0c8 0xc00185a0e0] [0xc00185a0b0 0xc00185a0c8 0xc00185a0e0] [0xc00185a0c0 0xc00185a0d8] [0x935700 0x935700] 0xc001fce9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:17:14.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:17:14.368: INFO: rc: 1
Dec 29 11:17:14.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001534180 exit status 1   true [0xc0017aa000 0xc0017aa018 0xc0017aa030] [0xc0017aa000 0xc0017aa018 0xc0017aa030] [0xc0017aa010 0xc0017aa028] [0x935700 0x935700] 0xc00187e1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:17:24.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:17:24.528: INFO: rc: 1
Dec 29 11:17:24.529: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0016451d0 exit status 1   true [0xc00185a0e8 0xc00185a100 0xc00185a118] [0xc00185a0e8 0xc00185a100 0xc00185a118] [0xc00185a0f8 0xc00185a110] [0x935700 0x935700] 0xc001fcec60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:17:34.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:17:34.690: INFO: rc: 1
Dec 29 11:17:34.691: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001534300 exit status 1   true [0xc0017aa038 0xc0017aa050 0xc0017aa068] [0xc0017aa038 0xc0017aa050 0xc0017aa068] [0xc0017aa048 0xc0017aa060] [0x935700 0x935700] 0xc00187e480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:17:44.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:17:44.820: INFO: rc: 1
Dec 29 11:17:44.820: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000c7c600 exit status 1   true [0xc001c82030 0xc001c82048 0xc001c82060] [0xc001c82030 0xc001c82048 0xc001c82060] [0xc001c82040 0xc001c82058] [0x935700 0x935700] 0xc0020de060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:17:54.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:17:54.958: INFO: rc: 1
Dec 29 11:17:54.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001645320 exit status 1   true [0xc00185a120 0xc00185a138 0xc00185a150] [0xc00185a120 0xc00185a138 0xc00185a150] [0xc00185a130 0xc00185a148] [0x935700 0x935700] 0xc001fcef00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:18:04.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:18:05.157: INFO: rc: 1
Dec 29 11:18:05.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000c7c750 exit status 1   true [0xc001c82068 0xc001c82080 0xc001c82098] [0xc001c82068 0xc001c82080 0xc001c82098] [0xc001c82078 0xc001c82090] [0x935700 0x935700] 0xc0020de360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:18:15.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:18:15.563: INFO: rc: 1
Dec 29 11:18:15.564: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001534540 exit status 1   true [0xc0017aa078 0xc0017aa090 0xc0017aa0a8] [0xc0017aa078 0xc0017aa090 0xc0017aa0a8] [0xc0017aa088 0xc0017aa0a0] [0x935700 0x935700] 0xc00187e720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:18:25.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:18:25.688: INFO: rc: 1
Dec 29 11:18:25.688: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000c7c1e0 exit status 1   true [0xc00016e000 0xc001c82010 0xc001c82028] [0xc00016e000 0xc001c82010 0xc001c82028] [0xc001c82008 0xc001c82020] [0x935700 0x935700] 0xc001ce7da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 29 11:18:35.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pqmt8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:18:35.841: INFO: rc: 1
Dec 29 11:18:35.841: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
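The failing RunHostCmd above is simply re-run on a fixed 10-second interval until the step's deadline expires, after which the test moves on. A minimal sketch of that retry loop (hypothetical helper, not the framework's actual code; `run_cmd` returns an `(rc, output)` pair):

```python
import time

def retry_host_cmd(run_cmd, interval=10.0, timeout=180.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Retry run_cmd every `interval` seconds until it succeeds (rc == 0)
    or `timeout` elapses; return the last (rc, output) pair either way."""
    deadline = clock() + timeout
    rc, out = run_cmd()
    while rc != 0 and clock() < deadline:
        sleep(interval)
        rc, out = run_cmd()
    return rc, out
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting.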
Dec 29 11:18:35.841: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 29 11:18:35.880: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pqmt8
Dec 29 11:18:35.890: INFO: Scaling statefulset ss to 0
Dec 29 11:18:35.908: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:18:35.911: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:18:36.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pqmt8" for this suite.
Dec 29 11:18:44.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:18:44.173: INFO: namespace: e2e-tests-statefulset-pqmt8, resource: bindings, ignored listing per whitelist
Dec 29 11:18:44.246: INFO: namespace e2e-tests-statefulset-pqmt8 deletion completed in 8.226968655s

• [SLOW TEST:399.584 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
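The "scaled down in reverse order" assertion in the test above means ss-2 must terminate before ss-1, and ss-1 before ss-0: a StatefulSet always removes the highest ordinal first. A minimal check over an observed deletion sequence (an illustrative sketch, not the framework's implementation):

```python
def deleted_in_reverse_ordinal_order(deletions):
    """deletions: pod names in the order their deletions were observed,
    e.g. ["ss-2", "ss-1", "ss-0"]. A StatefulSet scales down from the
    highest ordinal, so observed ordinals must be strictly decreasing."""
    ordinals = [int(name.rsplit("-", 1)[1]) for name in deletions]
    return all(a > b for a, b in zip(ordinals, ordinals[1:]))
```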
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:18:44.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 29 11:18:44.424: INFO: Waiting up to 5m0s for pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-gq4gr" to be "success or failure"
Dec 29 11:18:44.456: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.236786ms
Dec 29 11:18:46.492: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068767576s
Dec 29 11:18:48.533: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10889904s
Dec 29 11:18:50.596: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172472134s
Dec 29 11:18:52.614: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190102275s
Dec 29 11:18:54.681: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.257371579s
STEP: Saw pod success
Dec 29 11:18:54.681: INFO: Pod "downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:18:54.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 11:18:55.005: INFO: Waiting for pod downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005 to disappear
Dec 29 11:18:55.017: INFO: Pod downward-api-f9a7de62-2a2c-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:18:55.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gq4gr" for this suite.
Dec 29 11:19:01.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:19:01.270: INFO: namespace: e2e-tests-downward-api-gq4gr, resource: bindings, ignored listing per whitelist
Dec 29 11:19:01.343: INFO: namespace e2e-tests-downward-api-gq4gr deletion completed in 6.316995327s

• [SLOW TEST:17.097 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
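The Downward API test above injects the pod's own UID into a container through an env var `fieldRef`. A pod manifest doing the same can be built as a plain dict following the core/v1 schema (the env var name and command here are illustrative, not the test's exact spec):

```python
def downward_api_uid_pod(name, image="busybox"):
    """Pod manifest exposing metadata.uid as the POD_UID env var via
    the Downward API fieldRef (core/v1 schema)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": image,
                "command": ["sh", "-c", "echo $POD_UID"],
                "env": [{
                    "name": "POD_UID",
                    # metadata.uid is one of the fields fieldRef supports
                    "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
                }],
            }],
        },
    }
```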
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:19:01.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1229 11:19:15.814184       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 11:19:15.814: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:19:15.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bf46l" for this suite.
Dec 29 11:19:43.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:19:43.567: INFO: namespace: e2e-tests-gc-bf46l, resource: bindings, ignored listing per whitelist
Dec 29 11:19:43.574: INFO: namespace e2e-tests-gc-bf46l deletion completed in 27.75481804s

• [SLOW TEST:42.231 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
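The garbage-collector test above gives half of the deleted RC's pods a second owner (`simpletest-rc-to-stay`), and asserts those dependents survive: a dependent is only collectable once every one of its owners is gone. That invariant can be modeled as:

```python
def collectable(dependents, live_owners):
    """dependents: {pod_name: set of owner names}; live_owners: set of
    owners still present. A dependent may be garbage-collected only when
    none of its owners remain."""
    return {pod for pod, owners in dependents.items()
            if not (owners & live_owners)}
```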
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:19:43.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1d2dd82e-2a2d-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:19:44.043: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-89q58" to be "success or failure"
Dec 29 11:19:44.060: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.940677ms
Dec 29 11:19:46.311: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267980564s
Dec 29 11:19:48.325: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281546654s
Dec 29 11:19:50.338: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29510448s
Dec 29 11:19:52.375: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331361546s
Dec 29 11:19:54.389: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.345640639s
Dec 29 11:19:56.408: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.365171729s
STEP: Saw pod success
Dec 29 11:19:56.409: INFO: Pod "pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:19:56.420: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 11:19:56.857: INFO: Waiting for pod pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005 to disappear
Dec 29 11:19:56.871: INFO: Pod pod-projected-configmaps-1d3082d7-2a2d-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:19:56.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-89q58" for this suite.
Dec 29 11:20:02.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:20:03.012: INFO: namespace: e2e-tests-projected-89q58, resource: bindings, ignored listing per whitelist
Dec 29 11:20:03.101: INFO: namespace e2e-tests-projected-89q58 deletion completed in 6.215991302s

• [SLOW TEST:19.526 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
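`defaultMode` in the projected configMap volume above is the octal permission stamped onto each projected file (e.g. 0400 makes them owner-read-only). A small helper to render such a mode the way the test container would see it in `ls -l`:

```python
import stat

def mode_string(default_mode):
    """Render a volume defaultMode (int, e.g. 0o400) as the permission
    part of an `ls -l` listing for a regular file."""
    return stat.filemode(stat.S_IFREG | default_mode)[1:]
```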
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:20:03.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 11:20:13.733: INFO: Waiting up to 5m0s for pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005" in namespace "e2e-tests-pods-k2fcv" to be "success or failure"
Dec 29 11:20:13.975: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 242.550604ms
Dec 29 11:20:16.004: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271376649s
Dec 29 11:20:18.020: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287154376s
Dec 29 11:20:20.036: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303323812s
Dec 29 11:20:22.664: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.930733966s
Dec 29 11:20:24.690: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.957078609s
STEP: Saw pod success
Dec 29 11:20:24.690: INFO: Pod "client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:20:24.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 29 11:20:24.875: INFO: Waiting for pod client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005 to disappear
Dec 29 11:20:24.896: INFO: Pod client-envvars-2ee6d2f4-2a2d-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:20:24.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k2fcv" for this suite.
Dec 29 11:21:18.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:21:19.038: INFO: namespace: e2e-tests-pods-k2fcv, resource: bindings, ignored listing per whitelist
Dec 29 11:21:19.083: INFO: namespace e2e-tests-pods-k2fcv deletion completed in 54.179105758s

• [SLOW TEST:75.981 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
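The test above asserts that a pod started after a service exists sees that service as Docker-links-style env vars (`<NAME>_SERVICE_HOST`, `<NAME>_SERVICE_PORT`). The name mangling kubelet applies, uppercasing and mapping dashes to underscores, can be reproduced as (a sketch; the service name and address below are illustrative):

```python
def service_env_vars(name, cluster_ip, port):
    """Env vars kubelet injects for an existing service: name
    uppercased, dashes mapped to underscores."""
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": cluster_ip,
        f"{key}_SERVICE_PORT": str(port),
    }
```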
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:21:19.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-560c477a-2a2d-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 11:21:19.415: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-wgf52" to be "success or failure"
Dec 29 11:21:19.430: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.978077ms
Dec 29 11:21:21.464: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048643633s
Dec 29 11:21:23.478: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062447841s
Dec 29 11:21:25.737: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321322104s
Dec 29 11:21:27.769: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353238807s
Dec 29 11:21:29.786: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.370152379s
Dec 29 11:21:31.815: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.399818906s
STEP: Saw pod success
Dec 29 11:21:31.815: INFO: Pod "pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:21:31.826: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 29 11:21:31.936: INFO: Waiting for pod pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005 to disappear
Dec 29 11:21:31.944: INFO: Pod pod-projected-secrets-560eb49b-2a2d-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:21:31.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wgf52" for this suite.
Dec 29 11:21:37.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:21:38.070: INFO: namespace: e2e-tests-projected-wgf52, resource: bindings, ignored listing per whitelist
Dec 29 11:21:38.135: INFO: namespace e2e-tests-projected-wgf52 deletion completed in 6.183105529s

• [SLOW TEST:19.052 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
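For reference: the projected-secret test that just passed mounts a Secret with `defaultMode` and `fsGroup` set and asserts the file's content and permissions from inside the pod. The exact mode the test uses is not visible in this log excerpt; the following is a purely local sketch of what a mode like `0440` means for the projected file (GNU `stat` shown; BSD would use `stat -f '%Lp'`).

```shell
# Local sketch (not part of the e2e run): what defaultMode: 0440 means
# for a file projected from a Secret volume. File name and payload here
# are made up; the real test generates them in Go.
tmp=$(mktemp -d)
printf 'value-1' > "$tmp/data-1"   # stand-in for the secret key's payload
chmod 0440 "$tmp/data-1"           # owner/group read-only, like defaultMode: 0440
stat -c '%a' "$tmp/data-1"         # prints 440
```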
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:21:38.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 29 11:21:38.370: INFO: Waiting up to 5m0s for pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-x9b97" to be "success or failure"
Dec 29 11:21:38.406: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.344195ms
Dec 29 11:21:40.413: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042170718s
Dec 29 11:21:42.429: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058461144s
Dec 29 11:21:44.492: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121895398s
Dec 29 11:21:46.526: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155213196s
Dec 29 11:21:48.558: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187886381s
STEP: Saw pod success
Dec 29 11:21:48.559: INFO: Pod "downward-api-6159d3be-2a2d-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:21:48.569: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6159d3be-2a2d-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 11:21:48.802: INFO: Waiting for pod downward-api-6159d3be-2a2d-11ea-9252-0242ac110005 to disappear
Dec 29 11:21:48.825: INFO: Pod downward-api-6159d3be-2a2d-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:21:48.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x9b97" for this suite.
Dec 29 11:21:54.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:21:55.052: INFO: namespace: e2e-tests-downward-api-x9b97, resource: bindings, ignored listing per whitelist
Dec 29 11:21:55.058: INFO: namespace e2e-tests-downward-api-x9b97 deletion completed in 6.208999422s

• [SLOW TEST:16.923 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
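The Downward API test above verifies that when a container sets no resource limits, env vars wired to `limits.cpu`/`limits.memory` via `resourceFieldRef` fall back to node allocatable. A hedged sketch of the kind of pod manifest involved follows; the names are invented and the test's actual spec is generated in Go, not shown in this log.

```shell
# Hypothetical manifest sketch written to a local file (no cluster needed
# to inspect it). With no resources set on the container, the kubelet
# substitutes node allocatable for limits.cpu / limits.memory.
cat > downward-api-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
grep -c resourceFieldRef downward-api-pod.yaml   # 2
```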
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:21:55.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pvnnn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 29 11:21:55.285: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 29 11:22:29.678: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-pvnnn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 11:22:29.678: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 11:22:30.129: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:22:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-pvnnn" for this suite.
Dec 29 11:22:42.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:22:42.282: INFO: namespace: e2e-tests-pod-network-test-pvnnn, resource: bindings, ignored listing per whitelist
Dec 29 11:22:43.127: INFO: namespace e2e-tests-pod-network-test-pvnnn deletion completed in 12.98146154s

• [SLOW TEST:48.069 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
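Both connectivity checks logged so far (`nc -w 1 -u … 8081` for UDP and `curl … :8080/hostName` for HTTP) pipe the probe output through `grep -v '^\s*$'` to discard blank lines before matching the returned pod hostname. A self-contained illustration of that filter (`\s` is a GNU grep extension; the POSIX character class is used here for portability):

```shell
# Strip blank/whitespace-only lines, as the e2e exec commands do before
# comparing the echoed hostname against the expected netserver pod name.
printf 'netserver-0\n\n   \n' | grep -v '^[[:space:]]*$'   # prints: netserver-0
```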
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:22:43.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 29 11:22:44.649: INFO: Pod name wrapped-volume-race-88c7f3e1-2a2d-11ea-9252-0242ac110005: Found 0 pods out of 5
Dec 29 11:22:49.681: INFO: Pod name wrapped-volume-race-88c7f3e1-2a2d-11ea-9252-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-88c7f3e1-2a2d-11ea-9252-0242ac110005 in namespace e2e-tests-emptydir-wrapper-zh555, will wait for the garbage collector to delete the pods
Dec 29 11:24:52.060: INFO: Deleting ReplicationController wrapped-volume-race-88c7f3e1-2a2d-11ea-9252-0242ac110005 took: 77.255757ms
Dec 29 11:24:52.461: INFO: Terminating ReplicationController wrapped-volume-race-88c7f3e1-2a2d-11ea-9252-0242ac110005 pods took: 401.130598ms
STEP: Creating RC which spawns configmap-volume pods
Dec 29 11:25:43.700: INFO: Pod name wrapped-volume-race-f383de89-2a2d-11ea-9252-0242ac110005: Found 0 pods out of 5
Dec 29 11:25:48.755: INFO: Pod name wrapped-volume-race-f383de89-2a2d-11ea-9252-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f383de89-2a2d-11ea-9252-0242ac110005 in namespace e2e-tests-emptydir-wrapper-zh555, will wait for the garbage collector to delete the pods
Dec 29 11:27:32.903: INFO: Deleting ReplicationController wrapped-volume-race-f383de89-2a2d-11ea-9252-0242ac110005 took: 26.018276ms
Dec 29 11:27:33.304: INFO: Terminating ReplicationController wrapped-volume-race-f383de89-2a2d-11ea-9252-0242ac110005 pods took: 400.989155ms
STEP: Creating RC which spawns configmap-volume pods
Dec 29 11:28:22.920: INFO: Pod name wrapped-volume-race-526e7e25-2a2e-11ea-9252-0242ac110005: Found 0 pods out of 5
Dec 29 11:28:27.949: INFO: Pod name wrapped-volume-race-526e7e25-2a2e-11ea-9252-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-526e7e25-2a2e-11ea-9252-0242ac110005 in namespace e2e-tests-emptydir-wrapper-zh555, will wait for the garbage collector to delete the pods
Dec 29 11:30:22.102: INFO: Deleting ReplicationController wrapped-volume-race-526e7e25-2a2e-11ea-9252-0242ac110005 took: 19.18622ms
Dec 29 11:30:22.504: INFO: Terminating ReplicationController wrapped-volume-race-526e7e25-2a2e-11ea-9252-0242ac110005 pods took: 401.317697ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:31:14.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-zh555" for this suite.
Dec 29 11:31:23.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:31:23.095: INFO: namespace: e2e-tests-emptydir-wrapper-zh555, resource: bindings, ignored listing per whitelist
Dec 29 11:31:23.228: INFO: namespace e2e-tests-emptydir-wrapper-zh555 deletion completed in 8.218357605s

• [SLOW TEST:520.099 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
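The wrapper-volume race test above creates 50 ConfigMaps and, three times over, an RC whose five pods mount all of them at once to shake out mount races. A rough local sketch of generating that many volume entries (the file and volume names here are made up, not the test's):

```shell
# Hypothetical: emit 50 ConfigMap volume entries of the kind the race
# test's pods mount, 3 YAML lines per volume.
for i in $(seq 0 49); do
  printf -- '- name: racey-configmap-%d\n  configMap:\n    name: racey-configmap-%d\n' "$i" "$i"
done > wrapper-volumes.yaml
wc -l < wrapper-volumes.yaml   # 150
```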
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:31:23.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-be099243-2a2e-11ea-9252-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-be099227-2a2e-11ea-9252-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 29 11:31:23.450: INFO: Waiting up to 5m0s for pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-dtw8r" to be "success or failure"
Dec 29 11:31:23.466: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.113401ms
Dec 29 11:31:26.030: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.579267236s
Dec 29 11:31:28.325: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874924809s
Dec 29 11:31:30.347: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.89610552s
Dec 29 11:31:32.704: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.253424893s
Dec 29 11:31:34.752: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.301787325s
Dec 29 11:31:36.764: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.313742256s
STEP: Saw pod success
Dec 29 11:31:36.764: INFO: Pod "projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:31:36.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Dec 29 11:31:36.942: INFO: Waiting for pod projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005 to disappear
Dec 29 11:31:36.972: INFO: Pod projected-volume-be0991c2-2a2e-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:31:36.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dtw8r" for this suite.
Dec 29 11:31:43.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:31:43.068: INFO: namespace: e2e-tests-projected-dtw8r, resource: bindings, ignored listing per whitelist
Dec 29 11:31:43.194: INFO: namespace e2e-tests-projected-dtw8r deletion completed in 6.208684177s

• [SLOW TEST:19.965 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
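The "Projected combined" test above mounts a ConfigMap, a Secret, and Downward API items through a single `projected` volume (the container name `projected-all-volume-test` appears in the log). A hedged manifest sketch of that shape follows; key names and paths are invented, and the real spec lives in the Go test.

```shell
# Hypothetical sketch of a pod projecting all three source types into
# one volume, written locally for inspection only.
cat > projected-all.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /projected/configmap-data /projected/secret-data /projected/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume
          items:
          - key: configmap-data-1
            path: configmap-data
      - secret:
          name: secret-projected-all-test-volume
          items:
          - key: secret-data-1
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
grep -c 'path:' projected-all.yaml   # 3 projected paths
```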
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:31:43.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 29 11:31:43.435: INFO: PodSpec: initContainers in spec.initContainers
Dec 29 11:32:55.735: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ca043717-2a2e-11ea-9252-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-gq8dw", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-gq8dw/pods/pod-init-ca043717-2a2e-11ea-9252-0242ac110005", UID:"ca04ea94-2a2e-11ea-a994-fa163e34d433", ResourceVersion:"16452885", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713215903, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"435763443"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-s6jpp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002172700), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s6jpp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s6jpp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s6jpp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002382498), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f00000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002382510)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002382530)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002382538), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00238253c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713215903, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713215903, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713215903, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713215903, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc00147a1e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016022a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5e1ef77275a74455209769b8ededbc6ecdb8861518aa1d7e20cc710c798d46ce"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00147a220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00147a200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:32:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gq8dw" for this suite.
Dec 29 11:33:19.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:33:19.950: INFO: namespace: e2e-tests-init-container-gq8dw, resource: bindings, ignored listing per whitelist
Dec 29 11:33:20.075: INFO: namespace e2e-tests-init-container-gq8dw deletion completed in 24.261576993s

• [SLOW TEST:96.881 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
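The PodSpec dump above is hard to read inline; reconstructed as a sketch, it is a pod whose first init container always fails, so under `restartPolicy: Always` the kubelet restarts `init1` (its `RestartCount` reached 3 in the dump) while `init2` and the app container `run1` never start. The container names, images, and commands below come from the dump; the resource limits on `run1` are omitted for brevity.

```shell
# Readable reconstruction of the pod from the struct dump above,
# written locally for inspection only (not the literal API object).
cat > init-fail-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails -> init2 and run1 never run
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
grep -c 'name: init' init-fail-pod.yaml   # 2 init containers
```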
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:33:20.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-03c9b1b2-2a2f-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-03c9b1b2-2a2f-11ea-9252-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:34:57.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-88rct" for this suite.
Dec 29 11:35:21.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:35:21.224: INFO: namespace: e2e-tests-configmap-88rct, resource: bindings, ignored listing per whitelist
Dec 29 11:35:21.350: INFO: namespace e2e-tests-configmap-88rct deletion completed in 24.216023478s

• [SLOW TEST:121.276 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:35:21.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 29 11:35:21.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:23.966: INFO: stderr: ""
Dec 29 11:35:23.967: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 29 11:35:23.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:24.107: INFO: stderr: ""
Dec 29 11:35:24.108: INFO: stdout: "update-demo-nautilus-xzjdf "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 29 11:35:29.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:29.838: INFO: stderr: ""
Dec 29 11:35:29.838: INFO: stdout: "update-demo-nautilus-ltgsn update-demo-nautilus-xzjdf "
Dec 29 11:35:29.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltgsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:30.290: INFO: stderr: ""
Dec 29 11:35:30.290: INFO: stdout: ""
Dec 29 11:35:30.290: INFO: update-demo-nautilus-ltgsn is created but not running
Dec 29 11:35:35.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:35.471: INFO: stderr: ""
Dec 29 11:35:35.471: INFO: stdout: "update-demo-nautilus-ltgsn update-demo-nautilus-xzjdf "
Dec 29 11:35:35.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltgsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:35.669: INFO: stderr: ""
Dec 29 11:35:35.669: INFO: stdout: ""
Dec 29 11:35:35.669: INFO: update-demo-nautilus-ltgsn is created but not running
Dec 29 11:35:40.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:40.840: INFO: stderr: ""
Dec 29 11:35:40.840: INFO: stdout: "update-demo-nautilus-ltgsn update-demo-nautilus-xzjdf "
Dec 29 11:35:40.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltgsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:40.964: INFO: stderr: ""
Dec 29 11:35:40.964: INFO: stdout: "true"
Dec 29 11:35:40.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltgsn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:41.063: INFO: stderr: ""
Dec 29 11:35:41.063: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 11:35:41.063: INFO: validating pod update-demo-nautilus-ltgsn
Dec 29 11:35:41.109: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 11:35:41.109: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 11:35:41.109: INFO: update-demo-nautilus-ltgsn is verified up and running
Dec 29 11:35:41.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzjdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:41.217: INFO: stderr: ""
Dec 29 11:35:41.218: INFO: stdout: "true"
Dec 29 11:35:41.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzjdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:35:41.319: INFO: stderr: ""
Dec 29 11:35:41.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 11:35:41.320: INFO: validating pod update-demo-nautilus-xzjdf
Dec 29 11:35:41.333: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 11:35:41.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 11:35:41.333: INFO: update-demo-nautilus-xzjdf is verified up and running
STEP: rolling-update to new replication controller
Dec 29 11:35:41.340: INFO: scanned /root for discovery docs: 
Dec 29 11:35:41.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:36:16.784: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 29 11:36:16.785: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 29 11:36:16.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:36:16.961: INFO: stderr: ""
Dec 29 11:36:16.961: INFO: stdout: "update-demo-kitten-82ncf update-demo-kitten-vfn6z "
Dec 29 11:36:16.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-82ncf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:36:17.081: INFO: stderr: ""
Dec 29 11:36:17.081: INFO: stdout: "true"
Dec 29 11:36:17.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-82ncf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:36:17.183: INFO: stderr: ""
Dec 29 11:36:17.183: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 29 11:36:17.183: INFO: validating pod update-demo-kitten-82ncf
Dec 29 11:36:17.215: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 29 11:36:17.215: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 29 11:36:17.215: INFO: update-demo-kitten-82ncf is verified up and running
Dec 29 11:36:17.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vfn6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:36:17.318: INFO: stderr: ""
Dec 29 11:36:17.318: INFO: stdout: "true"
Dec 29 11:36:17.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vfn6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7x64l'
Dec 29 11:36:17.426: INFO: stderr: ""
Dec 29 11:36:17.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 29 11:36:17.426: INFO: validating pod update-demo-kitten-vfn6z
Dec 29 11:36:17.436: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 29 11:36:17.436: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 29 11:36:17.436: INFO: update-demo-kitten-vfn6z is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:36:17.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7x64l" for this suite.
Dec 29 11:36:41.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:36:41.562: INFO: namespace: e2e-tests-kubectl-7x64l, resource: bindings, ignored listing per whitelist
Dec 29 11:36:41.642: INFO: namespace e2e-tests-kubectl-7x64l deletion completed in 24.20004264s

• [SLOW TEST:80.292 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
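Editor's note: the spec above drives `kubectl rolling-update`, which the stderr at 11:36:16 already flags as deprecated (it was removed entirely in later kubectl releases). On a current cluster, the same nautilus-to-kitten image transition would be expressed as a Deployment rollout. A minimal sketch, assuming a Deployment named `update-demo` exists (that name is illustrative; only the image comes from the log):

```shell
# Modern replacement for the deprecated `kubectl rolling-update` shown above.
# Assumes a Deployment named update-demo with a container also named
# update-demo; the kitten image is the one the test log rolls to.
kubectl set image deployment/update-demo \
  update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
# Block until the rollout completes (analogous to the test's polling loop).
kubectl rollout status deployment/update-demo --timeout=120s
```

`kubectl rollout undo deployment/update-demo` would reverse the change if the new pods failed to come up.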
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:36:41.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7bdb7708-2a2f-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:36:41.825: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-ktfbh" to be "success or failure"
Dec 29 11:36:41.831: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.593632ms
Dec 29 11:36:44.172: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346888923s
Dec 29 11:36:46.183: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357385377s
Dec 29 11:36:48.201: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375298771s
Dec 29 11:36:50.958: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.132560607s
Dec 29 11:36:52.979: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.153843091s
STEP: Saw pod success
Dec 29 11:36:52.979: INFO: Pod "pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:36:52.990: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 11:36:53.319: INFO: Waiting for pod pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005 to disappear
Dec 29 11:36:53.429: INFO: Pod pod-projected-configmaps-7bdcde88-2a2f-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:36:53.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ktfbh" for this suite.
Dec 29 11:37:01.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:37:01.723: INFO: namespace: e2e-tests-projected-ktfbh, resource: bindings, ignored listing per whitelist
Dec 29 11:37:01.780: INFO: namespace e2e-tests-projected-ktfbh deletion completed in 8.331228429s

• [SLOW TEST:20.138 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
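Editor's note: the projected-ConfigMap spec above creates its ConfigMap and test pod through the Go client, so the pod spec never appears in the log. A sketch of an equivalent manifest, with illustrative names and a guessed mount path (the container name `projected-configmap-volume-test` is the one the log reports when fetching logs):

```shell
# Sketch of what the test builds programmatically: a ConfigMap projected into
# a pod volume, read back once by the container, after which the pod Succeeds.
kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
```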
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:37:01.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005
Dec 29 11:37:02.029: INFO: Pod name my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005: Found 0 pods out of 1
Dec 29 11:37:07.065: INFO: Pod name my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005: Found 1 pods out of 1
Dec 29 11:37:07.066: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005" are running
Dec 29 11:37:13.105: INFO: Pod "my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005-vlvkk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:37:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:37:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:37:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 11:37:02 +0000 UTC Reason: Message:}])
Dec 29 11:37:13.105: INFO: Trying to dial the pod
Dec 29 11:37:18.162: INFO: Controller my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005: Got expected result from replica 1 [my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005-vlvkk]: "my-hostname-basic-87e6fcb3-2a2f-11ea-9252-0242ac110005-vlvkk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:37:18.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-gzxm4" for this suite.
Dec 29 11:37:26.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:37:26.838: INFO: namespace: e2e-tests-replication-controller-gzxm4, resource: bindings, ignored listing per whitelist
Dec 29 11:37:26.919: INFO: namespace e2e-tests-replication-controller-gzxm4 deletion completed in 8.742871802s

• [SLOW TEST:25.138 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
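Editor's note: the ReplicationController spec above runs one replica of a serve-hostname image and then dials each replica, expecting it to answer with its own pod name (hence the "Got expected result from replica 1" line). A sketch of the controller it creates, with illustrative names and an assumed image tag:

```shell
# Sketch of the ReplicationController under test: a single "serve hostname"
# replica; the test then dials each pod and expects the pod's name back.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF
```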
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:37:26.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 29 11:37:27.378: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2cf77,SelfLink:/api/v1/namespaces/e2e-tests-watch-2cf77/configmaps/e2e-watch-test-watch-closed,UID:96fc12b8-2a2f-11ea-a994-fa163e34d433,ResourceVersion:16453437,Generation:0,CreationTimestamp:2019-12-29 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 29 11:37:27.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2cf77,SelfLink:/api/v1/namespaces/e2e-tests-watch-2cf77/configmaps/e2e-watch-test-watch-closed,UID:96fc12b8-2a2f-11ea-a994-fa163e34d433,ResourceVersion:16453438,Generation:0,CreationTimestamp:2019-12-29 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 29 11:37:27.473: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2cf77,SelfLink:/api/v1/namespaces/e2e-tests-watch-2cf77/configmaps/e2e-watch-test-watch-closed,UID:96fc12b8-2a2f-11ea-a994-fa163e34d433,ResourceVersion:16453439,Generation:0,CreationTimestamp:2019-12-29 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 29 11:37:27.474: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2cf77,SelfLink:/api/v1/namespaces/e2e-tests-watch-2cf77/configmaps/e2e-watch-test-watch-closed,UID:96fc12b8-2a2f-11ea-a994-fa163e34d433,ResourceVersion:16453440,Generation:0,CreationTimestamp:2019-12-29 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:37:27.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2cf77" for this suite.
Dec 29 11:37:33.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:37:33.663: INFO: namespace: e2e-tests-watch-2cf77, resource: bindings, ignored listing per whitelist
Dec 29 11:37:33.834: INFO: namespace e2e-tests-watch-2cf77 deletion completed in 6.331921478s

• [SLOW TEST:6.914 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
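Editor's note: the Watchers spec above closes its first watch after observing ResourceVersion 16453438 (the `mutation: 1` MODIFIED event), then opens a new watch from that version and receives the 16453439 and 16453440 events it missed. The same restart-from-last-observed-version pattern can be sketched against the raw watch API (resourceVersion and namespace taken from the log; requires a live cluster, so this is illustrative only):

```shell
# Sketch: restart a watch from the last observed resourceVersion via the raw
# API, mirroring what the test does through the Go client.
kubectl proxy --port=8001 &
sleep 1
curl -s "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-watch-2cf77/configmaps?watch=true&resourceVersion=16453438"
kill %1
```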
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:37:33.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 29 11:37:34.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 29 11:37:34.212: INFO: stderr: ""
Dec 29 11:37:34.212: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:37:34.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fflb9" for this suite.
Dec 29 11:37:40.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:37:40.313: INFO: namespace: e2e-tests-kubectl-fflb9, resource: bindings, ignored listing per whitelist
Dec 29 11:37:40.421: INFO: namespace e2e-tests-kubectl-fflb9 deletion completed in 6.192078138s

• [SLOW TEST:6.587 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
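Editor's note: the `cluster-info` stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m` etc.), which is why it looks garbled in the raw log. When scripting assertions against `kubectl cluster-info`, the escapes can be stripped first; a small self-contained sketch using the first line of the logged output as sample data:

```shell
# Strip ANSI color escapes, as seen in the cluster-info stdout above, to get
# plain text suitable for grepping. The sample mirrors the logged output.
esc=$(printf '\033')
sample="${esc}[0;32mKubernetes master${esc}[0m is running at ${esc}[0;33mhttps://172.24.4.212:6443${esc}[0m"
# Delete every ESC [ ... m color sequence.
plain=$(printf '%s' "$sample" | sed "s/${esc}\[[0-9;]*m//g")
printf '%s\n' "$plain"  # Kubernetes master is running at https://172.24.4.212:6443
```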
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:37:40.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-9efece83-2a2f-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 11:37:40.772: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-nftnz" to be "success or failure"
Dec 29 11:37:40.776: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680341ms
Dec 29 11:37:42.785: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012871587s
Dec 29 11:37:44.802: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029726215s
Dec 29 11:37:47.059: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286158563s
Dec 29 11:37:49.072: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299939642s
Dec 29 11:37:51.084: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.311368247s
STEP: Saw pod success
Dec 29 11:37:51.084: INFO: Pod "pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:37:51.088: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 29 11:37:51.672: INFO: Waiting for pod pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005 to disappear
Dec 29 11:37:51.925: INFO: Pod pod-projected-secrets-9effa295-2a2f-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:37:51.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nftnz" for this suite.
Dec 29 11:37:58.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:37:58.255: INFO: namespace: e2e-tests-projected-nftnz, resource: bindings, ignored listing per whitelist
Dec 29 11:37:58.360: INFO: namespace e2e-tests-projected-nftnz deletion completed in 6.412231239s

• [SLOW TEST:17.938 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:37:58.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-a9a82ce6-2a2f-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 11:37:58.677: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-k4z4q" to be "success or failure"
Dec 29 11:37:58.894: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 216.573257ms
Dec 29 11:38:01.299: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621783201s
Dec 29 11:38:03.327: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649638034s
Dec 29 11:38:05.344: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666814018s
Dec 29 11:38:07.393: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.71605372s
Dec 29 11:38:09.419: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.741380596s
STEP: Saw pod success
Dec 29 11:38:09.419: INFO: Pod "pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:38:09.428: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 11:38:09.991: INFO: Waiting for pod pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005 to disappear
Dec 29 11:38:10.433: INFO: Pod pod-projected-secrets-a9ab356c-2a2f-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:38:10.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k4z4q" for this suite.
Dec 29 11:38:16.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:38:16.669: INFO: namespace: e2e-tests-projected-k4z4q, resource: bindings, ignored listing per whitelist
Dec 29 11:38:16.689: INFO: namespace e2e-tests-projected-k4z4q deletion completed in 6.225631715s

• [SLOW TEST:18.328 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:38:16.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-b489b2c9-2a2f-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 11:38:17.113: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-ccphm" to be "success or failure"
Dec 29 11:38:17.215: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 101.504584ms
Dec 29 11:38:19.235: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121678327s
Dec 29 11:38:21.260: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146249443s
Dec 29 11:38:23.321: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207893642s
Dec 29 11:38:25.340: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.226040907s
Dec 29 11:38:27.357: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.243872102s
Dec 29 11:38:29.383: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.269145808s
STEP: Saw pod success
Dec 29 11:38:29.383: INFO: Pod "pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:38:29.397: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 29 11:38:29.537: INFO: Waiting for pod pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005 to disappear
Dec 29 11:38:29.553: INFO: Pod pod-projected-secrets-b48afaff-2a2f-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:38:29.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ccphm" for this suite.
Dec 29 11:38:35.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:38:35.781: INFO: namespace: e2e-tests-projected-ccphm, resource: bindings, ignored listing per whitelist
Dec 29 11:38:35.883: INFO: namespace e2e-tests-projected-ccphm deletion completed in 6.248151677s

• [SLOW TEST:19.194 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:38:35.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:38:36.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xb4bf" for this suite.
Dec 29 11:39:00.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:39:00.773: INFO: namespace: e2e-tests-pods-xb4bf, resource: bindings, ignored listing per whitelist
Dec 29 11:39:00.827: INFO: namespace e2e-tests-pods-xb4bf deletion completed in 24.209344524s

• [SLOW TEST:24.943 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:39:00.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-cee49bd1-2a2f-11ea-9252-0242ac110005
STEP: Creating secret with name s-test-opt-upd-cee49c47-2a2f-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cee49bd1-2a2f-11ea-9252-0242ac110005
STEP: Updating secret s-test-opt-upd-cee49c47-2a2f-11ea-9252-0242ac110005
STEP: Creating secret with name s-test-opt-create-cee49c8d-2a2f-11ea-9252-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:40:35.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-njhms" for this suite.
Dec 29 11:40:59.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:40:59.598: INFO: namespace: e2e-tests-projected-njhms, resource: bindings, ignored listing per whitelist
Dec 29 11:40:59.735: INFO: namespace e2e-tests-projected-njhms deletion completed in 24.271447016s

• [SLOW TEST:118.908 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:40:59.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-z4s98
I1229 11:40:59.923427       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-z4s98, replica count: 1
I1229 11:41:00.974339       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:01.974979       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:02.975680       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:03.977032       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:04.978136       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:05.978948       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:06.979591       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:07.980314       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:08.981114       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:41:09.981733       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 29 11:41:10.125: INFO: Created: latency-svc-kxsdr
Dec 29 11:41:10.310: INFO: Got endpoints: latency-svc-kxsdr [228.67322ms]
Dec 29 11:41:10.544: INFO: Created: latency-svc-h8qhc
Dec 29 11:41:10.610: INFO: Got endpoints: latency-svc-h8qhc [297.577919ms]
Dec 29 11:41:10.685: INFO: Created: latency-svc-sxq7j
Dec 29 11:41:10.760: INFO: Got endpoints: latency-svc-sxq7j [448.86652ms]
Dec 29 11:41:10.800: INFO: Created: latency-svc-zdzzv
Dec 29 11:41:10.843: INFO: Got endpoints: latency-svc-zdzzv [531.258065ms]
Dec 29 11:41:10.961: INFO: Created: latency-svc-tsbk6
Dec 29 11:41:11.010: INFO: Got endpoints: latency-svc-tsbk6 [698.56514ms]
Dec 29 11:41:11.020: INFO: Created: latency-svc-whhd5
Dec 29 11:41:11.160: INFO: Got endpoints: latency-svc-whhd5 [849.02644ms]
Dec 29 11:41:11.185: INFO: Created: latency-svc-dzzgm
Dec 29 11:41:11.208: INFO: Got endpoints: latency-svc-dzzgm [896.351842ms]
Dec 29 11:41:11.390: INFO: Created: latency-svc-7l5zb
Dec 29 11:41:11.415: INFO: Got endpoints: latency-svc-7l5zb [1.102380755s]
Dec 29 11:41:11.598: INFO: Created: latency-svc-6st9g
Dec 29 11:41:11.607: INFO: Got endpoints: latency-svc-6st9g [1.296007815s]
Dec 29 11:41:11.663: INFO: Created: latency-svc-x7ftm
Dec 29 11:41:11.674: INFO: Got endpoints: latency-svc-x7ftm [1.361867223s]
Dec 29 11:41:11.849: INFO: Created: latency-svc-5gsft
Dec 29 11:41:12.021: INFO: Got endpoints: latency-svc-5gsft [1.709509912s]
Dec 29 11:41:12.027: INFO: Created: latency-svc-l8kgl
Dec 29 11:41:12.065: INFO: Got endpoints: latency-svc-l8kgl [1.753244903s]
Dec 29 11:41:12.131: INFO: Created: latency-svc-w272p
Dec 29 11:41:12.284: INFO: Got endpoints: latency-svc-w272p [1.972126395s]
Dec 29 11:41:12.318: INFO: Created: latency-svc-cd6s8
Dec 29 11:41:12.324: INFO: Got endpoints: latency-svc-cd6s8 [2.011858185s]
Dec 29 11:41:12.378: INFO: Created: latency-svc-d5l7c
Dec 29 11:41:12.565: INFO: Got endpoints: latency-svc-d5l7c [2.252610325s]
Dec 29 11:41:12.728: INFO: Created: latency-svc-dzd47
Dec 29 11:41:12.766: INFO: Got endpoints: latency-svc-dzd47 [2.454254423s]
Dec 29 11:41:12.778: INFO: Created: latency-svc-9ndsb
Dec 29 11:41:12.785: INFO: Got endpoints: latency-svc-9ndsb [2.174166031s]
Dec 29 11:41:13.005: INFO: Created: latency-svc-rthr4
Dec 29 11:41:13.033: INFO: Got endpoints: latency-svc-rthr4 [2.271718635s]
Dec 29 11:41:13.039: INFO: Created: latency-svc-5rm2d
Dec 29 11:41:13.059: INFO: Got endpoints: latency-svc-5rm2d [2.215596594s]
Dec 29 11:41:13.230: INFO: Created: latency-svc-thsxj
Dec 29 11:41:13.231: INFO: Got endpoints: latency-svc-thsxj [2.220172822s]
Dec 29 11:41:13.429: INFO: Created: latency-svc-h596h
Dec 29 11:41:13.465: INFO: Got endpoints: latency-svc-h596h [2.304445272s]
Dec 29 11:41:13.678: INFO: Created: latency-svc-chq9h
Dec 29 11:41:13.710: INFO: Got endpoints: latency-svc-chq9h [2.502348274s]
Dec 29 11:41:13.882: INFO: Created: latency-svc-vgzp6
Dec 29 11:41:13.883: INFO: Got endpoints: latency-svc-vgzp6 [2.46853701s]
Dec 29 11:41:14.103: INFO: Created: latency-svc-7bl67
Dec 29 11:41:14.104: INFO: Got endpoints: latency-svc-7bl67 [2.496823242s]
Dec 29 11:41:14.389: INFO: Created: latency-svc-ftxc4
Dec 29 11:41:14.401: INFO: Got endpoints: latency-svc-ftxc4 [2.727197447s]
Dec 29 11:41:14.623: INFO: Created: latency-svc-mvll7
Dec 29 11:41:14.777: INFO: Got endpoints: latency-svc-mvll7 [2.756166051s]
Dec 29 11:41:14.802: INFO: Created: latency-svc-l8jkc
Dec 29 11:41:14.818: INFO: Got endpoints: latency-svc-l8jkc [2.751920992s]
Dec 29 11:41:15.024: INFO: Created: latency-svc-94xzv
Dec 29 11:41:15.047: INFO: Got endpoints: latency-svc-94xzv [2.762647466s]
Dec 29 11:41:15.131: INFO: Created: latency-svc-wjcvh
Dec 29 11:41:15.220: INFO: Got endpoints: latency-svc-wjcvh [2.89565957s]
Dec 29 11:41:15.237: INFO: Created: latency-svc-79kn4
Dec 29 11:41:15.257: INFO: Got endpoints: latency-svc-79kn4 [2.691476362s]
Dec 29 11:41:15.329: INFO: Created: latency-svc-4kxgh
Dec 29 11:41:15.462: INFO: Got endpoints: latency-svc-4kxgh [2.696244405s]
Dec 29 11:41:15.488: INFO: Created: latency-svc-g5xm5
Dec 29 11:41:15.657: INFO: Got endpoints: latency-svc-g5xm5 [2.872818337s]
Dec 29 11:41:15.665: INFO: Created: latency-svc-ns9n2
Dec 29 11:41:15.685: INFO: Got endpoints: latency-svc-ns9n2 [2.652069498s]
Dec 29 11:41:15.751: INFO: Created: latency-svc-n6hjr
Dec 29 11:41:15.866: INFO: Got endpoints: latency-svc-n6hjr [2.806927123s]
Dec 29 11:41:15.904: INFO: Created: latency-svc-l4zj6
Dec 29 11:41:15.960: INFO: Created: latency-svc-kk49b
Dec 29 11:41:15.963: INFO: Got endpoints: latency-svc-l4zj6 [2.732768282s]
Dec 29 11:41:15.978: INFO: Got endpoints: latency-svc-kk49b [2.512711833s]
Dec 29 11:41:16.150: INFO: Created: latency-svc-drdv7
Dec 29 11:41:16.179: INFO: Got endpoints: latency-svc-drdv7 [2.468385576s]
Dec 29 11:41:16.461: INFO: Created: latency-svc-pmhq5
Dec 29 11:41:16.482: INFO: Got endpoints: latency-svc-pmhq5 [2.597874852s]
Dec 29 11:41:16.748: INFO: Created: latency-svc-zbzq6
Dec 29 11:41:16.789: INFO: Got endpoints: latency-svc-zbzq6 [2.685166225s]
Dec 29 11:41:16.867: INFO: Created: latency-svc-jz7rf
Dec 29 11:41:17.045: INFO: Got endpoints: latency-svc-jz7rf [2.643402661s]
Dec 29 11:41:17.085: INFO: Created: latency-svc-5btr4
Dec 29 11:41:17.122: INFO: Got endpoints: latency-svc-5btr4 [2.34472657s]
Dec 29 11:41:17.285: INFO: Created: latency-svc-l9zlc
Dec 29 11:41:17.329: INFO: Got endpoints: latency-svc-l9zlc [2.511539916s]
Dec 29 11:41:17.351: INFO: Created: latency-svc-fdpm7
Dec 29 11:41:17.465: INFO: Got endpoints: latency-svc-fdpm7 [2.41738862s]
Dec 29 11:41:17.501: INFO: Created: latency-svc-6mqr9
Dec 29 11:41:17.543: INFO: Got endpoints: latency-svc-6mqr9 [2.322586047s]
Dec 29 11:41:17.683: INFO: Created: latency-svc-gn4h6
Dec 29 11:41:17.732: INFO: Got endpoints: latency-svc-gn4h6 [2.47495289s]
Dec 29 11:41:17.858: INFO: Created: latency-svc-s8l2t
Dec 29 11:41:17.901: INFO: Got endpoints: latency-svc-s8l2t [2.438229234s]
Dec 29 11:41:17.959: INFO: Created: latency-svc-c666z
Dec 29 11:41:18.088: INFO: Got endpoints: latency-svc-c666z [2.4302195s]
Dec 29 11:41:18.119: INFO: Created: latency-svc-7mbdk
Dec 29 11:41:18.140: INFO: Got endpoints: latency-svc-7mbdk [2.455247767s]
Dec 29 11:41:18.313: INFO: Created: latency-svc-zhjmb
Dec 29 11:41:18.347: INFO: Got endpoints: latency-svc-zhjmb [2.480096993s]
Dec 29 11:41:18.560: INFO: Created: latency-svc-4hddk
Dec 29 11:41:18.613: INFO: Got endpoints: latency-svc-4hddk [2.649575508s]
Dec 29 11:41:18.739: INFO: Created: latency-svc-254ch
Dec 29 11:41:18.750: INFO: Got endpoints: latency-svc-254ch [2.771339282s]
Dec 29 11:41:18.805: INFO: Created: latency-svc-mvc4n
Dec 29 11:41:18.900: INFO: Got endpoints: latency-svc-mvc4n [2.720806433s]
Dec 29 11:41:18.925: INFO: Created: latency-svc-wqdqf
Dec 29 11:41:18.957: INFO: Got endpoints: latency-svc-wqdqf [2.475434149s]
Dec 29 11:41:19.076: INFO: Created: latency-svc-dxjzp
Dec 29 11:41:19.138: INFO: Got endpoints: latency-svc-dxjzp [2.348317468s]
Dec 29 11:41:19.154: INFO: Created: latency-svc-2dbr9
Dec 29 11:41:19.186: INFO: Got endpoints: latency-svc-2dbr9 [2.141218874s]
Dec 29 11:41:19.380: INFO: Created: latency-svc-t2kg4
Dec 29 11:41:19.394: INFO: Got endpoints: latency-svc-t2kg4 [2.270639755s]
Dec 29 11:41:19.457: INFO: Created: latency-svc-wps4w
Dec 29 11:41:19.572: INFO: Got endpoints: latency-svc-wps4w [2.242089188s]
Dec 29 11:41:19.645: INFO: Created: latency-svc-5l9b9
Dec 29 11:41:19.772: INFO: Got endpoints: latency-svc-5l9b9 [2.306869605s]
Dec 29 11:41:19.801: INFO: Created: latency-svc-98snt
Dec 29 11:41:19.818: INFO: Got endpoints: latency-svc-98snt [2.274932008s]
Dec 29 11:41:19.885: INFO: Created: latency-svc-p4m9b
Dec 29 11:41:19.940: INFO: Got endpoints: latency-svc-p4m9b [2.207555671s]
Dec 29 11:41:19.958: INFO: Created: latency-svc-5dmrx
Dec 29 11:41:19.965: INFO: Got endpoints: latency-svc-5dmrx [2.06347786s]
Dec 29 11:41:20.012: INFO: Created: latency-svc-64fpk
Dec 29 11:41:20.031: INFO: Got endpoints: latency-svc-64fpk [1.942302144s]
Dec 29 11:41:20.175: INFO: Created: latency-svc-wjth4
Dec 29 11:41:20.268: INFO: Created: latency-svc-kw2mm
Dec 29 11:41:20.420: INFO: Got endpoints: latency-svc-wjth4 [2.279039993s]
Dec 29 11:41:20.475: INFO: Created: latency-svc-pft7x
Dec 29 11:41:20.489: INFO: Got endpoints: latency-svc-pft7x [1.875694356s]
Dec 29 11:41:20.646: INFO: Got endpoints: latency-svc-kw2mm [2.298323769s]
Dec 29 11:41:20.696: INFO: Created: latency-svc-67cpr
Dec 29 11:41:20.865: INFO: Got endpoints: latency-svc-67cpr [444.666356ms]
Dec 29 11:41:20.903: INFO: Created: latency-svc-k7l8r
Dec 29 11:41:20.924: INFO: Got endpoints: latency-svc-k7l8r [2.174640928s]
Dec 29 11:41:21.054: INFO: Created: latency-svc-x8jhn
Dec 29 11:41:21.072: INFO: Got endpoints: latency-svc-x8jhn [2.171017503s]
Dec 29 11:41:21.107: INFO: Created: latency-svc-t6qlc
Dec 29 11:41:21.126: INFO: Got endpoints: latency-svc-t6qlc [2.167919859s]
Dec 29 11:41:21.367: INFO: Created: latency-svc-4t7dh
Dec 29 11:41:21.395: INFO: Got endpoints: latency-svc-4t7dh [2.256658634s]
Dec 29 11:41:21.747: INFO: Created: latency-svc-qmvz9
Dec 29 11:41:21.758: INFO: Got endpoints: latency-svc-qmvz9 [2.571538495s]
Dec 29 11:41:21.977: INFO: Created: latency-svc-w55kh
Dec 29 11:41:22.030: INFO: Got endpoints: latency-svc-w55kh [2.636386352s]
Dec 29 11:41:22.191: INFO: Created: latency-svc-8wthz
Dec 29 11:41:22.252: INFO: Got endpoints: latency-svc-8wthz [2.679799785s]
Dec 29 11:41:22.294: INFO: Created: latency-svc-5hc2w
Dec 29 11:41:22.477: INFO: Got endpoints: latency-svc-5hc2w [2.705035356s]
Dec 29 11:41:22.505: INFO: Created: latency-svc-885k6
Dec 29 11:41:22.737: INFO: Got endpoints: latency-svc-885k6 [2.91890224s]
Dec 29 11:41:22.830: INFO: Created: latency-svc-b9642
Dec 29 11:41:23.010: INFO: Got endpoints: latency-svc-b9642 [3.069487075s]
Dec 29 11:41:23.013: INFO: Created: latency-svc-wcwfw
Dec 29 11:41:23.013: INFO: Got endpoints: latency-svc-wcwfw [3.047445502s]
Dec 29 11:41:23.062: INFO: Created: latency-svc-76hjg
Dec 29 11:41:23.190: INFO: Got endpoints: latency-svc-76hjg [3.158948931s]
Dec 29 11:41:23.224: INFO: Created: latency-svc-8b2m7
Dec 29 11:41:23.273: INFO: Got endpoints: latency-svc-8b2m7 [2.783363774s]
Dec 29 11:41:23.346: INFO: Created: latency-svc-rspj9
Dec 29 11:41:23.388: INFO: Got endpoints: latency-svc-rspj9 [2.741634769s]
Dec 29 11:41:23.407: INFO: Created: latency-svc-s9ctj
Dec 29 11:41:23.420: INFO: Got endpoints: latency-svc-s9ctj [2.555285461s]
Dec 29 11:41:23.575: INFO: Created: latency-svc-db68w
Dec 29 11:41:23.590: INFO: Got endpoints: latency-svc-db68w [2.664968237s]
Dec 29 11:41:23.735: INFO: Created: latency-svc-sfmh6
Dec 29 11:41:23.751: INFO: Got endpoints: latency-svc-sfmh6 [2.679123401s]
Dec 29 11:41:23.913: INFO: Created: latency-svc-jhbmq
Dec 29 11:41:23.993: INFO: Got endpoints: latency-svc-jhbmq [2.867420359s]
Dec 29 11:41:24.130: INFO: Created: latency-svc-dzrpx
Dec 29 11:41:24.156: INFO: Got endpoints: latency-svc-dzrpx [2.760532369s]
Dec 29 11:41:24.354: INFO: Created: latency-svc-wf6kw
Dec 29 11:41:24.361: INFO: Got endpoints: latency-svc-wf6kw [2.603024841s]
Dec 29 11:41:24.443: INFO: Created: latency-svc-ntznb
Dec 29 11:41:24.580: INFO: Got endpoints: latency-svc-ntznb [2.549881947s]
Dec 29 11:41:24.601: INFO: Created: latency-svc-m86g9
Dec 29 11:41:24.615: INFO: Got endpoints: latency-svc-m86g9 [2.362551628s]
Dec 29 11:41:24.741: INFO: Created: latency-svc-gmstk
Dec 29 11:41:24.823: INFO: Got endpoints: latency-svc-gmstk [2.345857979s]
Dec 29 11:41:24.827: INFO: Created: latency-svc-xf67f
Dec 29 11:41:24.828: INFO: Got endpoints: latency-svc-xf67f [2.089821235s]
Dec 29 11:41:24.961: INFO: Created: latency-svc-c4srk
Dec 29 11:41:24.987: INFO: Got endpoints: latency-svc-c4srk [1.974379194s]
Dec 29 11:41:25.019: INFO: Created: latency-svc-f4fzv
Dec 29 11:41:25.150: INFO: Got endpoints: latency-svc-f4fzv [2.138921543s]
Dec 29 11:41:25.169: INFO: Created: latency-svc-ljpcf
Dec 29 11:41:25.219: INFO: Got endpoints: latency-svc-ljpcf [2.028491498s]
Dec 29 11:41:25.385: INFO: Created: latency-svc-kkgz4
Dec 29 11:41:25.393: INFO: Got endpoints: latency-svc-kkgz4 [2.119461595s]
Dec 29 11:41:25.463: INFO: Created: latency-svc-d45kr
Dec 29 11:41:25.584: INFO: Got endpoints: latency-svc-d45kr [2.196008335s]
Dec 29 11:41:25.632: INFO: Created: latency-svc-wlxmm
Dec 29 11:41:25.643: INFO: Got endpoints: latency-svc-wlxmm [2.222411455s]
Dec 29 11:41:25.806: INFO: Created: latency-svc-nmh6w
Dec 29 11:41:25.852: INFO: Created: latency-svc-4g9xr
Dec 29 11:41:25.864: INFO: Got endpoints: latency-svc-nmh6w [2.274322819s]
Dec 29 11:41:25.995: INFO: Got endpoints: latency-svc-4g9xr [2.2440211s]
Dec 29 11:41:26.064: INFO: Created: latency-svc-jwzpw
Dec 29 11:41:26.124: INFO: Got endpoints: latency-svc-jwzpw [2.130347487s]
Dec 29 11:41:26.804: INFO: Created: latency-svc-5nz5l
Dec 29 11:41:26.804: INFO: Got endpoints: latency-svc-5nz5l [2.648346438s]
Dec 29 11:41:27.193: INFO: Created: latency-svc-wxx56
Dec 29 11:41:27.199: INFO: Got endpoints: latency-svc-wxx56 [2.837081261s]
Dec 29 11:41:27.363: INFO: Created: latency-svc-jjqrm
Dec 29 11:41:27.374: INFO: Got endpoints: latency-svc-jjqrm [2.793551938s]
Dec 29 11:41:27.436: INFO: Created: latency-svc-k8fc7
Dec 29 11:41:27.575: INFO: Got endpoints: latency-svc-k8fc7 [2.9595148s]
Dec 29 11:41:27.624: INFO: Created: latency-svc-crqvb
Dec 29 11:41:27.643: INFO: Got endpoints: latency-svc-crqvb [2.818950974s]
Dec 29 11:41:27.859: INFO: Created: latency-svc-lnqc4
Dec 29 11:41:27.874: INFO: Got endpoints: latency-svc-lnqc4 [3.046097742s]
Dec 29 11:41:28.054: INFO: Created: latency-svc-djgch
Dec 29 11:41:28.062: INFO: Got endpoints: latency-svc-djgch [3.074441399s]
Dec 29 11:41:28.126: INFO: Created: latency-svc-j5hd4
Dec 29 11:41:28.245: INFO: Got endpoints: latency-svc-j5hd4 [3.094932706s]
Dec 29 11:41:28.485: INFO: Created: latency-svc-m25s7
Dec 29 11:41:28.498: INFO: Got endpoints: latency-svc-m25s7 [3.278369802s]
Dec 29 11:41:28.551: INFO: Created: latency-svc-vrx5l
Dec 29 11:41:28.715: INFO: Got endpoints: latency-svc-vrx5l [3.322153986s]
Dec 29 11:41:28.766: INFO: Created: latency-svc-pzz4q
Dec 29 11:41:28.914: INFO: Created: latency-svc-5nzsz
Dec 29 11:41:28.920: INFO: Got endpoints: latency-svc-pzz4q [3.335580232s]
Dec 29 11:41:28.946: INFO: Got endpoints: latency-svc-5nzsz [3.302575658s]
Dec 29 11:41:29.006: INFO: Created: latency-svc-l9llb
Dec 29 11:41:29.092: INFO: Got endpoints: latency-svc-l9llb [3.227495141s]
Dec 29 11:41:29.113: INFO: Created: latency-svc-825sk
Dec 29 11:41:29.123: INFO: Got endpoints: latency-svc-825sk [3.127593322s]
Dec 29 11:41:29.192: INFO: Created: latency-svc-6hb5c
Dec 29 11:41:29.337: INFO: Got endpoints: latency-svc-6hb5c [3.212856846s]
Dec 29 11:41:29.391: INFO: Created: latency-svc-s8dcv
Dec 29 11:41:29.423: INFO: Got endpoints: latency-svc-s8dcv [2.618288803s]
Dec 29 11:41:29.610: INFO: Created: latency-svc-b64vd
Dec 29 11:41:29.769: INFO: Got endpoints: latency-svc-b64vd [2.57055694s]
Dec 29 11:41:29.795: INFO: Created: latency-svc-tbwq2
Dec 29 11:41:29.795: INFO: Got endpoints: latency-svc-tbwq2 [2.420385855s]
Dec 29 11:41:29.832: INFO: Created: latency-svc-rjzls
Dec 29 11:41:29.969: INFO: Created: latency-svc-p59qv
Dec 29 11:41:29.980: INFO: Got endpoints: latency-svc-rjzls [2.404732962s]
Dec 29 11:41:29.984: INFO: Got endpoints: latency-svc-p59qv [2.341284152s]
Dec 29 11:41:30.126: INFO: Created: latency-svc-cp6q5
Dec 29 11:41:30.130: INFO: Got endpoints: latency-svc-cp6q5 [2.256045069s]
Dec 29 11:41:30.200: INFO: Created: latency-svc-w4fk7
Dec 29 11:41:30.421: INFO: Got endpoints: latency-svc-w4fk7 [2.359038367s]
Dec 29 11:41:30.421: INFO: Created: latency-svc-vwmfg
Dec 29 11:41:30.473: INFO: Got endpoints: latency-svc-vwmfg [2.228253203s]
Dec 29 11:41:30.618: INFO: Created: latency-svc-mhddm
Dec 29 11:41:30.701: INFO: Created: latency-svc-sxkwn
Dec 29 11:41:30.818: INFO: Got endpoints: latency-svc-sxkwn [2.101774849s]
Dec 29 11:41:30.830: INFO: Got endpoints: latency-svc-mhddm [2.332047384s]
Dec 29 11:41:30.892: INFO: Created: latency-svc-skcvb
Dec 29 11:41:30.965: INFO: Got endpoints: latency-svc-skcvb [2.04455074s]
Dec 29 11:41:30.996: INFO: Created: latency-svc-z6w2r
Dec 29 11:41:31.009: INFO: Got endpoints: latency-svc-z6w2r [2.062530071s]
Dec 29 11:41:31.158: INFO: Created: latency-svc-cztzc
Dec 29 11:41:31.172: INFO: Got endpoints: latency-svc-cztzc [2.080135718s]
Dec 29 11:41:31.253: INFO: Created: latency-svc-cxwwc
Dec 29 11:41:31.253: INFO: Got endpoints: latency-svc-cxwwc [2.129923856s]
Dec 29 11:41:31.391: INFO: Created: latency-svc-bwxzh
Dec 29 11:41:31.462: INFO: Got endpoints: latency-svc-bwxzh [2.124919765s]
Dec 29 11:41:31.466: INFO: Created: latency-svc-jc5qs
Dec 29 11:41:31.579: INFO: Got endpoints: latency-svc-jc5qs [2.156074145s]
Dec 29 11:41:31.616: INFO: Created: latency-svc-pmnjv
Dec 29 11:41:31.624: INFO: Got endpoints: latency-svc-pmnjv [1.854511429s]
Dec 29 11:41:31.781: INFO: Created: latency-svc-jjnbn
Dec 29 11:41:31.804: INFO: Got endpoints: latency-svc-jjnbn [2.009181143s]
Dec 29 11:41:31.870: INFO: Created: latency-svc-cjj9r
Dec 29 11:41:31.975: INFO: Got endpoints: latency-svc-cjj9r [1.99546469s]
Dec 29 11:41:31.987: INFO: Created: latency-svc-qg7nt
Dec 29 11:41:32.007: INFO: Got endpoints: latency-svc-qg7nt [2.022567146s]
Dec 29 11:41:32.052: INFO: Created: latency-svc-k7x45
Dec 29 11:41:32.203: INFO: Got endpoints: latency-svc-k7x45 [2.07253406s]
Dec 29 11:41:32.236: INFO: Created: latency-svc-sxqx6
Dec 29 11:41:32.267: INFO: Got endpoints: latency-svc-sxqx6 [1.846215946s]
Dec 29 11:41:32.421: INFO: Created: latency-svc-vbf47
Dec 29 11:41:32.432: INFO: Got endpoints: latency-svc-vbf47 [1.958048344s]
Dec 29 11:41:32.504: INFO: Created: latency-svc-tvbzg
Dec 29 11:41:32.654: INFO: Got endpoints: latency-svc-tvbzg [1.836403296s]
Dec 29 11:41:32.670: INFO: Created: latency-svc-fzgxr
Dec 29 11:41:32.734: INFO: Created: latency-svc-f7nl2
Dec 29 11:41:32.747: INFO: Got endpoints: latency-svc-fzgxr [1.91682791s]
Dec 29 11:41:32.932: INFO: Got endpoints: latency-svc-f7nl2 [1.96633772s]
Dec 29 11:41:33.014: INFO: Created: latency-svc-zv4k4
Dec 29 11:41:33.158: INFO: Got endpoints: latency-svc-zv4k4 [2.149276919s]
Dec 29 11:41:33.253: INFO: Created: latency-svc-jpr6q
Dec 29 11:41:33.366: INFO: Got endpoints: latency-svc-jpr6q [2.193135743s]
Dec 29 11:41:33.389: INFO: Created: latency-svc-nj6tg
Dec 29 11:41:33.422: INFO: Got endpoints: latency-svc-nj6tg [2.169357085s]
Dec 29 11:41:33.581: INFO: Created: latency-svc-xlxdv
Dec 29 11:41:33.606: INFO: Got endpoints: latency-svc-xlxdv [2.143657396s]
Dec 29 11:41:33.804: INFO: Created: latency-svc-clk6k
Dec 29 11:41:33.831: INFO: Got endpoints: latency-svc-clk6k [2.251716847s]
Dec 29 11:41:33.883: INFO: Created: latency-svc-69p8r
Dec 29 11:41:33.973: INFO: Got endpoints: latency-svc-69p8r [2.348411089s]
Dec 29 11:41:34.021: INFO: Created: latency-svc-9k6cr
Dec 29 11:41:34.036: INFO: Got endpoints: latency-svc-9k6cr [2.23128988s]
Dec 29 11:41:34.218: INFO: Created: latency-svc-sxt2z
Dec 29 11:41:34.218: INFO: Got endpoints: latency-svc-sxt2z [2.242566035s]
Dec 29 11:41:34.481: INFO: Created: latency-svc-scp9n
Dec 29 11:41:34.525: INFO: Got endpoints: latency-svc-scp9n [2.51835186s]
Dec 29 11:41:34.547: INFO: Created: latency-svc-bs8vb
Dec 29 11:41:34.674: INFO: Got endpoints: latency-svc-bs8vb [2.470778257s]
Dec 29 11:41:34.716: INFO: Created: latency-svc-xf8bf
Dec 29 11:41:34.776: INFO: Got endpoints: latency-svc-xf8bf [2.508712046s]
Dec 29 11:41:34.920: INFO: Created: latency-svc-mfz68
Dec 29 11:41:34.939: INFO: Got endpoints: latency-svc-mfz68 [2.507055385s]
Dec 29 11:41:35.001: INFO: Created: latency-svc-lsc4t
Dec 29 11:41:35.128: INFO: Got endpoints: latency-svc-lsc4t [2.473837146s]
Dec 29 11:41:35.153: INFO: Created: latency-svc-fvhr2
Dec 29 11:41:35.176: INFO: Got endpoints: latency-svc-fvhr2 [2.428909278s]
Dec 29 11:41:35.370: INFO: Created: latency-svc-jn9zr
Dec 29 11:41:35.387: INFO: Got endpoints: latency-svc-jn9zr [2.454606944s]
Dec 29 11:41:35.440: INFO: Created: latency-svc-dnvdv
Dec 29 11:41:35.568: INFO: Got endpoints: latency-svc-dnvdv [2.409238675s]
Dec 29 11:41:35.600: INFO: Created: latency-svc-ffjgq
Dec 29 11:41:35.605: INFO: Got endpoints: latency-svc-ffjgq [2.238430171s]
Dec 29 11:41:35.659: INFO: Created: latency-svc-pwj42
Dec 29 11:41:35.750: INFO: Got endpoints: latency-svc-pwj42 [2.327926842s]
Dec 29 11:41:35.770: INFO: Created: latency-svc-9ck2z
Dec 29 11:41:35.810: INFO: Got endpoints: latency-svc-9ck2z [2.202833037s]
Dec 29 11:41:35.848: INFO: Created: latency-svc-zksl9
Dec 29 11:41:35.964: INFO: Got endpoints: latency-svc-zksl9 [2.132611025s]
Dec 29 11:41:35.989: INFO: Created: latency-svc-bxxht
Dec 29 11:41:36.006: INFO: Got endpoints: latency-svc-bxxht [2.033598334s]
Dec 29 11:41:36.054: INFO: Created: latency-svc-5zlk5
Dec 29 11:41:36.150: INFO: Got endpoints: latency-svc-5zlk5 [2.114280668s]
Dec 29 11:41:36.180: INFO: Created: latency-svc-sklsx
Dec 29 11:41:36.190: INFO: Got endpoints: latency-svc-sklsx [1.972072774s]
Dec 29 11:41:36.473: INFO: Created: latency-svc-b7hsc
Dec 29 11:41:36.648: INFO: Got endpoints: latency-svc-b7hsc [2.122950294s]
Dec 29 11:41:36.669: INFO: Created: latency-svc-45fl2
Dec 29 11:41:36.688: INFO: Got endpoints: latency-svc-45fl2 [2.014091339s]
Dec 29 11:41:36.857: INFO: Created: latency-svc-82ptd
Dec 29 11:41:36.878: INFO: Got endpoints: latency-svc-82ptd [2.101188315s]
Dec 29 11:41:37.007: INFO: Created: latency-svc-xfdmb
Dec 29 11:41:37.169: INFO: Got endpoints: latency-svc-xfdmb [2.229761324s]
Dec 29 11:41:37.190: INFO: Created: latency-svc-65l7f
Dec 29 11:41:37.199: INFO: Got endpoints: latency-svc-65l7f [2.070612236s]
Dec 29 11:41:37.254: INFO: Created: latency-svc-788xc
Dec 29 11:41:37.343: INFO: Got endpoints: latency-svc-788xc [2.166929845s]
Dec 29 11:41:37.358: INFO: Created: latency-svc-n2x5w
Dec 29 11:41:37.383: INFO: Got endpoints: latency-svc-n2x5w [1.996025726s]
Dec 29 11:41:37.422: INFO: Created: latency-svc-dltlr
Dec 29 11:41:37.504: INFO: Got endpoints: latency-svc-dltlr [1.936182545s]
Dec 29 11:41:37.559: INFO: Created: latency-svc-hjdgn
Dec 29 11:41:37.570: INFO: Got endpoints: latency-svc-hjdgn [1.965794192s]
Dec 29 11:41:37.734: INFO: Created: latency-svc-48v8j
Dec 29 11:41:37.769: INFO: Got endpoints: latency-svc-48v8j [2.017882932s]
Dec 29 11:41:38.772: INFO: Created: latency-svc-26rsp
Dec 29 11:41:38.817: INFO: Got endpoints: latency-svc-26rsp [3.007071414s]
Dec 29 11:41:38.937: INFO: Created: latency-svc-f5m72
Dec 29 11:41:39.006: INFO: Got endpoints: latency-svc-f5m72 [3.040944957s]
Dec 29 11:41:39.022: INFO: Created: latency-svc-skv2c
Dec 29 11:41:39.105: INFO: Got endpoints: latency-svc-skv2c [3.098023535s]
Dec 29 11:41:39.135: INFO: Created: latency-svc-54q5h
Dec 29 11:41:39.143: INFO: Got endpoints: latency-svc-54q5h [2.992187425s]
Dec 29 11:41:39.217: INFO: Created: latency-svc-4lm4w
Dec 29 11:41:39.343: INFO: Got endpoints: latency-svc-4lm4w [3.152568594s]
Dec 29 11:41:39.357: INFO: Created: latency-svc-cmms4
Dec 29 11:41:39.380: INFO: Got endpoints: latency-svc-cmms4 [2.731184488s]
Dec 29 11:41:39.520: INFO: Created: latency-svc-dqplh
Dec 29 11:41:39.531: INFO: Got endpoints: latency-svc-dqplh [2.842060937s]
Dec 29 11:41:39.768: INFO: Created: latency-svc-q965j
Dec 29 11:41:39.788: INFO: Got endpoints: latency-svc-q965j [2.909506317s]
Dec 29 11:41:39.949: INFO: Created: latency-svc-z2xkc
Dec 29 11:41:39.964: INFO: Got endpoints: latency-svc-z2xkc [2.794845221s]
Dec 29 11:41:40.012: INFO: Created: latency-svc-6nhvx
Dec 29 11:41:40.107: INFO: Got endpoints: latency-svc-6nhvx [2.907346818s]
Dec 29 11:41:40.132: INFO: Created: latency-svc-ccdxk
Dec 29 11:41:40.151: INFO: Got endpoints: latency-svc-ccdxk [2.807281126s]
Dec 29 11:41:40.187: INFO: Created: latency-svc-469l9
Dec 29 11:41:40.401: INFO: Got endpoints: latency-svc-469l9 [3.017768536s]
Dec 29 11:41:40.432: INFO: Created: latency-svc-rksnk
Dec 29 11:41:40.446: INFO: Got endpoints: latency-svc-rksnk [2.941847145s]
Dec 29 11:41:40.641: INFO: Created: latency-svc-kmn9f
Dec 29 11:41:40.677: INFO: Got endpoints: latency-svc-kmn9f [3.106427574s]
Dec 29 11:41:40.798: INFO: Created: latency-svc-6dz8s
Dec 29 11:41:41.016: INFO: Created: latency-svc-w758s
Dec 29 11:41:41.045: INFO: Got endpoints: latency-svc-6dz8s [3.275810956s]
Dec 29 11:41:41.059: INFO: Got endpoints: latency-svc-w758s [2.241910113s]
Dec 29 11:41:41.103: INFO: Created: latency-svc-748tl
Dec 29 11:41:41.221: INFO: Got endpoints: latency-svc-748tl [2.214513292s]
Dec 29 11:41:41.273: INFO: Created: latency-svc-84shx
Dec 29 11:41:41.298: INFO: Got endpoints: latency-svc-84shx [2.192548106s]
Dec 29 11:41:41.403: INFO: Created: latency-svc-wvxdl
Dec 29 11:41:41.440: INFO: Got endpoints: latency-svc-wvxdl [2.296981117s]
Dec 29 11:41:41.486: INFO: Created: latency-svc-wdjpj
Dec 29 11:41:41.564: INFO: Got endpoints: latency-svc-wdjpj [2.220225717s]
Dec 29 11:41:41.620: INFO: Created: latency-svc-vbhzg
Dec 29 11:41:41.662: INFO: Got endpoints: latency-svc-vbhzg [2.281642015s]
Dec 29 11:41:42.036: INFO: Created: latency-svc-26t98
Dec 29 11:41:42.043: INFO: Got endpoints: latency-svc-26t98 [2.512000344s]
Dec 29 11:41:42.099: INFO: Created: latency-svc-pzb2k
Dec 29 11:41:42.244: INFO: Got endpoints: latency-svc-pzb2k [2.455623487s]
Dec 29 11:41:42.258: INFO: Created: latency-svc-ngnzk
Dec 29 11:41:42.499: INFO: Got endpoints: latency-svc-ngnzk [2.534388152s]
Dec 29 11:41:42.646: INFO: Created: latency-svc-ff2z6
Dec 29 11:41:42.811: INFO: Created: latency-svc-vrkc6
Dec 29 11:41:42.821: INFO: Got endpoints: latency-svc-ff2z6 [2.713845687s]
Dec 29 11:41:42.859: INFO: Got endpoints: latency-svc-vrkc6 [2.70807036s]
Dec 29 11:41:43.005: INFO: Created: latency-svc-d226f
Dec 29 11:41:43.179: INFO: Got endpoints: latency-svc-d226f [2.777522889s]
Dec 29 11:41:43.196: INFO: Created: latency-svc-hckq2
Dec 29 11:41:43.198: INFO: Got endpoints: latency-svc-hckq2 [2.751569909s]
Dec 29 11:41:43.198: INFO: Latencies: [297.577919ms 444.666356ms 448.86652ms 531.258065ms 698.56514ms 849.02644ms 896.351842ms 1.102380755s 1.296007815s 1.361867223s 1.709509912s 1.753244903s 1.836403296s 1.846215946s 1.854511429s 1.875694356s 1.91682791s 1.936182545s 1.942302144s 1.958048344s 1.965794192s 1.96633772s 1.972072774s 1.972126395s 1.974379194s 1.99546469s 1.996025726s 2.009181143s 2.011858185s 2.014091339s 2.017882932s 2.022567146s 2.028491498s 2.033598334s 2.04455074s 2.062530071s 2.06347786s 2.070612236s 2.07253406s 2.080135718s 2.089821235s 2.101188315s 2.101774849s 2.114280668s 2.119461595s 2.122950294s 2.124919765s 2.129923856s 2.130347487s 2.132611025s 2.138921543s 2.141218874s 2.143657396s 2.149276919s 2.156074145s 2.166929845s 2.167919859s 2.169357085s 2.171017503s 2.174166031s 2.174640928s 2.192548106s 2.193135743s 2.196008335s 2.202833037s 2.207555671s 2.214513292s 2.215596594s 2.220172822s 2.220225717s 2.222411455s 2.228253203s 2.229761324s 2.23128988s 2.238430171s 2.241910113s 2.242089188s 2.242566035s 2.2440211s 2.251716847s 2.252610325s 2.256045069s 2.256658634s 2.270639755s 2.271718635s 2.274322819s 2.274932008s 2.279039993s 2.281642015s 2.296981117s 2.298323769s 2.304445272s 2.306869605s 2.322586047s 2.327926842s 2.332047384s 2.341284152s 2.34472657s 2.345857979s 2.348317468s 2.348411089s 2.359038367s 2.362551628s 2.404732962s 2.409238675s 2.41738862s 2.420385855s 2.428909278s 2.4302195s 2.438229234s 2.454254423s 2.454606944s 2.455247767s 2.455623487s 2.468385576s 2.46853701s 2.470778257s 2.473837146s 2.47495289s 2.475434149s 2.480096993s 2.496823242s 2.502348274s 2.507055385s 2.508712046s 2.511539916s 2.512000344s 2.512711833s 2.51835186s 2.534388152s 2.549881947s 2.555285461s 2.57055694s 2.571538495s 2.597874852s 2.603024841s 2.618288803s 2.636386352s 2.643402661s 2.648346438s 2.649575508s 2.652069498s 2.664968237s 2.679123401s 2.679799785s 2.685166225s 2.691476362s 2.696244405s 2.705035356s 2.70807036s 2.713845687s 2.720806433s 2.727197447s 2.731184488s 2.732768282s 2.741634769s 2.751569909s 2.751920992s 2.756166051s 2.760532369s 2.762647466s 2.771339282s 2.777522889s 2.783363774s 2.793551938s 2.794845221s 2.806927123s 2.807281126s 2.818950974s 2.837081261s 2.842060937s 2.867420359s 2.872818337s 2.89565957s 2.907346818s 2.909506317s 2.91890224s 2.941847145s 2.9595148s 2.992187425s 3.007071414s 3.017768536s 3.040944957s 3.046097742s 3.047445502s 3.069487075s 3.074441399s 3.094932706s 3.098023535s 3.106427574s 3.127593322s 3.152568594s 3.158948931s 3.212856846s 3.227495141s 3.275810956s 3.278369802s 3.302575658s 3.322153986s 3.335580232s]
Dec 29 11:41:43.199: INFO: 50 %ile: 2.348411089s
Dec 29 11:41:43.199: INFO: 90 %ile: 3.007071414s
Dec 29 11:41:43.199: INFO: 99 %ile: 3.322153986s
Dec 29 11:41:43.199: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:41:43.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-z4s98" for this suite.
Dec 29 11:42:35.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:42:35.653: INFO: namespace: e2e-tests-svc-latency-z4s98, resource: bindings, ignored listing per whitelist
Dec 29 11:42:35.744: INFO: namespace e2e-tests-svc-latency-z4s98 deletion completed in 52.529029372s

• [SLOW TEST:96.008 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
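The `50 %ile` / `90 %ile` / `99 %ile` lines above are computed by indexing into the sorted latency samples. A minimal sketch of that computation, using nearest-rank style indexing and a small list of made-up sample values (this is an approximation; the exact rounding in the e2e framework may differ slightly):

```python
# Sketch: reproduce the "N %ile" summary lines printed after the
# Latencies list. Assumption: nearest-rank percentile over the sorted
# samples; the sample values here are illustrative, not from this run.

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of latencies (seconds)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

latencies = [0.297, 1.102, 2.014, 2.348, 2.512, 2.731, 3.007, 3.152, 3.275, 3.335]
for p in (50, 90, 99):
    print(f"{p} %ile: {percentile(latencies, p)}s")
```

With 200 real samples, the 50th-percentile rank lands on sample 100 of the sorted list, which is how a single observed duration ends up reported verbatim as the percentile.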
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:42:35.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 29 11:42:35.871: INFO: Waiting up to 5m0s for pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-x85pn" to be "success or failure"
Dec 29 11:42:35.877: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641116ms
Dec 29 11:42:37.892: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020892947s
Dec 29 11:42:39.907: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036797605s
Dec 29 11:42:42.248: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377854767s
Dec 29 11:42:44.259: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388533484s
Dec 29 11:42:46.281: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.410533189s
STEP: Saw pod success
Dec 29 11:42:46.281: INFO: Pod "pod-4ee42d01-2a30-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:42:46.289: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ee42d01-2a30-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:42:46.730: INFO: Waiting for pod pod-4ee42d01-2a30-11ea-9252-0242ac110005 to disappear
Dec 29 11:42:46.993: INFO: Pod pod-4ee42d01-2a30-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:42:46.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x85pn" for this suite.
Dec 29 11:42:53.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:42:53.391: INFO: namespace: e2e-tests-emptydir-x85pn, resource: bindings, ignored listing per whitelist
Dec 29 11:42:53.412: INFO: namespace e2e-tests-emptydir-x85pn deletion completed in 6.407626096s

• [SLOW TEST:17.668 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:42:53.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 29 11:42:53.600: INFO: Waiting up to 5m0s for pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005" in namespace "e2e-tests-containers-knqln" to be "success or failure"
Dec 29 11:42:53.626: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.031124ms
Dec 29 11:42:55.754: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154165295s
Dec 29 11:42:57.770: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170280397s
Dec 29 11:42:59.799: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199139456s
Dec 29 11:43:01.902: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302079002s
Dec 29 11:43:03.916: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.315797234s
Dec 29 11:43:05.932: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.332328516s
STEP: Saw pod success
Dec 29 11:43:05.932: INFO: Pod "client-containers-5973014a-2a30-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:43:05.938: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5973014a-2a30-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:43:06.539: INFO: Waiting for pod client-containers-5973014a-2a30-11ea-9252-0242ac110005 to disappear
Dec 29 11:43:06.602: INFO: Pod client-containers-5973014a-2a30-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:43:06.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-knqln" for this suite.
Dec 29 11:43:12.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:43:12.730: INFO: namespace: e2e-tests-containers-knqln, resource: bindings, ignored listing per whitelist
Dec 29 11:43:12.861: INFO: namespace e2e-tests-containers-knqln deletion completed in 6.242476892s

• [SLOW TEST:19.449 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:43:12.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 11:43:12.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-7xzvk'
Dec 29 11:43:13.067: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 29 11:43:13.068: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 29 11:43:13.076: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 29 11:43:13.167: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 29 11:43:13.230: INFO: scanned /root for discovery docs: 
Dec 29 11:43:13.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-7xzvk'
Dec 29 11:43:41.089: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 29 11:43:41.090: INFO: stdout: "Created e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3\nScaling up e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 29 11:43:41.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7xzvk'
Dec 29 11:43:41.235: INFO: stderr: ""
Dec 29 11:43:41.235: INFO: stdout: "e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3-cnlvc "
Dec 29 11:43:41.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3-cnlvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xzvk'
Dec 29 11:43:41.342: INFO: stderr: ""
Dec 29 11:43:41.342: INFO: stdout: "true"
Dec 29 11:43:41.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3-cnlvc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xzvk'
Dec 29 11:43:41.450: INFO: stderr: ""
Dec 29 11:43:41.451: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 29 11:43:41.451: INFO: e2e-test-nginx-rc-72709fe89364dd2106177ec3e8fe09e3-cnlvc is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 29 11:43:41.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7xzvk'
Dec 29 11:43:41.593: INFO: stderr: ""
Dec 29 11:43:41.593: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:43:41.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7xzvk" for this suite.
Dec 29 11:44:05.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:44:05.863: INFO: namespace: e2e-tests-kubectl-7xzvk, resource: bindings, ignored listing per whitelist
Dec 29 11:44:05.921: INFO: namespace e2e-tests-kubectl-7xzvk deletion completed in 24.21100553s

• [SLOW TEST:53.059 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
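Both deprecation warnings in the test output above (`kubectl run --generator=run/v1` and `Command "rolling-update" is deprecated, use "rollout" instead`) point at the same replacement: manage the pods with a Deployment and use its rollout machinery instead of a ReplicationController. A hedged sketch of the equivalent manifest (the name is illustrative, not from this run; only the image and pull policy are taken from the log):

```yaml
# Illustrative modern equivalent of the ReplicationController +
# rolling-update flow exercised above. Apply with `kubectl apply -f`,
# then trigger a same-image restart with
# `kubectl rollout restart deployment/e2e-test-nginx`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx        # hypothetical name, not from this run
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: e2e-test-nginx
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent
```

A Deployment's default `RollingUpdate` strategy reproduces the "keep 1 pod available, don't exceed 2 pods" behavior seen in the rolling-update stdout via `maxUnavailable` and `maxSurge`.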
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:44:05.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 29 11:44:06.156: INFO: Number of nodes with available pods: 0
Dec 29 11:44:06.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:07.183: INFO: Number of nodes with available pods: 0
Dec 29 11:44:07.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:08.185: INFO: Number of nodes with available pods: 0
Dec 29 11:44:08.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:09.173: INFO: Number of nodes with available pods: 0
Dec 29 11:44:09.174: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:10.178: INFO: Number of nodes with available pods: 0
Dec 29 11:44:10.178: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:11.236: INFO: Number of nodes with available pods: 0
Dec 29 11:44:11.236: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:12.204: INFO: Number of nodes with available pods: 0
Dec 29 11:44:12.204: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:13.355: INFO: Number of nodes with available pods: 0
Dec 29 11:44:13.355: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:14.189: INFO: Number of nodes with available pods: 0
Dec 29 11:44:14.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:15.256: INFO: Number of nodes with available pods: 1
Dec 29 11:44:15.256: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 29 11:44:15.389: INFO: Number of nodes with available pods: 0
Dec 29 11:44:15.389: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:16.418: INFO: Number of nodes with available pods: 0
Dec 29 11:44:16.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:17.435: INFO: Number of nodes with available pods: 0
Dec 29 11:44:17.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:18.431: INFO: Number of nodes with available pods: 0
Dec 29 11:44:18.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:19.415: INFO: Number of nodes with available pods: 0
Dec 29 11:44:19.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:20.416: INFO: Number of nodes with available pods: 0
Dec 29 11:44:20.416: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:21.409: INFO: Number of nodes with available pods: 0
Dec 29 11:44:21.409: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:22.417: INFO: Number of nodes with available pods: 0
Dec 29 11:44:22.417: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:23.421: INFO: Number of nodes with available pods: 0
Dec 29 11:44:23.421: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:24.423: INFO: Number of nodes with available pods: 0
Dec 29 11:44:24.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:25.408: INFO: Number of nodes with available pods: 0
Dec 29 11:44:25.409: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:26.415: INFO: Number of nodes with available pods: 0
Dec 29 11:44:26.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:28.610: INFO: Number of nodes with available pods: 0
Dec 29 11:44:28.610: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:29.768: INFO: Number of nodes with available pods: 0
Dec 29 11:44:29.768: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:30.412: INFO: Number of nodes with available pods: 0
Dec 29 11:44:30.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 11:44:31.409: INFO: Number of nodes with available pods: 1
Dec 29 11:44:31.409: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mnjcg, will wait for the garbage collector to delete the pods
Dec 29 11:44:31.490: INFO: Deleting DaemonSet.extensions daemon-set took: 20.549704ms
Dec 29 11:44:31.590: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.50221ms
Dec 29 11:44:42.716: INFO: Number of nodes with available pods: 0
Dec 29 11:44:42.716: INFO: Number of running nodes: 0, number of available pods: 0
Dec 29 11:44:42.721: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mnjcg/daemonsets","resourceVersion":"16455703"},"items":null}

Dec 29 11:44:42.724: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mnjcg/pods","resourceVersion":"16455703"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:44:42.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-mnjcg" for this suite.
Dec 29 11:44:50.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:44:50.970: INFO: namespace: e2e-tests-daemonsets-mnjcg, resource: bindings, ignored listing per whitelist
Dec 29 11:44:51.011: INFO: namespace e2e-tests-daemonsets-mnjcg deletion completed in 8.272697417s

• [SLOW TEST:45.090 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
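The DaemonSet check above is a simple poll: the framework re-queries "Number of nodes with available pods" roughly once a second until it matches the node count or a timeout expires, both when the pods launch and again after one is killed. A generic sketch of that retry pattern (the function and variable names are illustrative, not the framework's):

```python
import time

def wait_until(check, timeout=30.0, interval=1.0):
    """Poll `check` (a zero-arg callable returning bool) until it passes
    or `timeout` seconds elapse; mirrors the once-a-second retries in
    the log above. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulate a daemon pod that becomes available on the third poll,
# like the ~9s ramp-up visible in the log.
state = {"polls": 0}
def pod_available():
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_until(pod_available, timeout=10.0, interval=0.01)
print(ok)  # True once the simulated pod reports available
```

The log's interleaved "available pods: 0" / "running more than one daemon pod" pairs are just the per-iteration diagnostics this kind of loop prints before each retry.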
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:44:51.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:45:53.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-252gk" for this suite.
Dec 29 11:45:59.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:46:00.202: INFO: namespace: e2e-tests-container-runtime-252gk, resource: bindings, ignored listing per whitelist
Dec 29 11:46:00.208: INFO: namespace e2e-tests-container-runtime-252gk deletion completed in 6.518351628s

• [SLOW TEST:69.196 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:46:00.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 29 11:46:00.581: INFO: Waiting up to 5m0s for pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005" in namespace "e2e-tests-var-expansion-8zfmg" to be "success or failure"
Dec 29 11:46:00.593: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.636031ms
Dec 29 11:46:02.623: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04187213s
Dec 29 11:46:04.655: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073517319s
Dec 29 11:46:06.831: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249506972s
Dec 29 11:46:08.863: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.281294267s
Dec 29 11:46:10.885: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303412157s
STEP: Saw pod success
Dec 29 11:46:10.885: INFO: Pod "var-expansion-c8d23806-2a30-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:46:10.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c8d23806-2a30-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 11:46:11.049: INFO: Waiting for pod var-expansion-c8d23806-2a30-11ea-9252-0242ac110005 to disappear
Dec 29 11:46:11.069: INFO: Pod var-expansion-c8d23806-2a30-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:46:11.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-8zfmg" for this suite.
Dec 29 11:46:17.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:46:17.212: INFO: namespace: e2e-tests-var-expansion-8zfmg, resource: bindings, ignored listing per whitelist
Dec 29 11:46:17.291: INFO: namespace e2e-tests-var-expansion-8zfmg deletion completed in 6.209652013s

• [SLOW TEST:17.082 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
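The Variable Expansion spec above passes by creating a pod whose env vars reference earlier ones with `$(VAR)` syntax, then checking the container log for the expanded value. A minimal manifest of the kind this test creates might look like the following (the pod name, image, and values are illustrative, not recovered from the log):

```yaml
# Hypothetical reconstruction of the env-composition test pod.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: "foo-value"
    - name: BAR
      value: "bar-value"
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"     # composes FOO and BAR into a new env var
```

The pod runs to completion (Phase="Succeeded" in the log above), after which the framework fetches the container log to verify the composed value.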
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:46:17.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d2ff2b77-2a30-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:46:17.537: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-h9w94" to be "success or failure"
Dec 29 11:46:17.582: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.327966ms
Dec 29 11:46:19.592: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055344788s
Dec 29 11:46:21.613: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075830199s
Dec 29 11:46:23.684: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147003556s
Dec 29 11:46:25.696: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158646019s
Dec 29 11:46:27.708: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171103274s
STEP: Saw pod success
Dec 29 11:46:27.708: INFO: Pod "pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:46:27.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 11:46:28.480: INFO: Waiting for pod pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005 to disappear
Dec 29 11:46:28.719: INFO: Pod pod-projected-configmaps-d3004bfe-2a30-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:46:28.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h9w94" for this suite.
Dec 29 11:46:36.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:46:37.949: INFO: namespace: e2e-tests-projected-h9w94, resource: bindings, ignored listing per whitelist
Dec 29 11:46:37.949: INFO: namespace e2e-tests-projected-h9w94 deletion completed in 9.219140147s

• [SLOW TEST:20.657 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
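The "consumable from pods in volume with mappings" spec mounts a ConfigMap through a projected volume, remapping each key onto a chosen file path via `items`. A sketch of the two objects involved (all names and paths are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1
            path: path/to/data-2   # the "mapping": key data-1 surfaces at this path
```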
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:46:37.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-df566736-2a30-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:46:38.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-wkrnm" to be "success or failure"
Dec 29 11:46:38.247: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.513129ms
Dec 29 11:46:40.470: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242268212s
Dec 29 11:46:42.483: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255357421s
Dec 29 11:46:44.710: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483076526s
Dec 29 11:46:46.741: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.513854522s
Dec 29 11:46:48.789: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.561239227s
STEP: Saw pod success
Dec 29 11:46:48.789: INFO: Pod "pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:46:48.802: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 11:46:48.950: INFO: Waiting for pod pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005 to disappear
Dec 29 11:46:48.956: INFO: Pod pod-configmaps-df57bbcf-2a30-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:46:48.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wkrnm" for this suite.
Dec 29 11:46:55.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:46:55.256: INFO: namespace: e2e-tests-configmap-wkrnm, resource: bindings, ignored listing per whitelist
Dec 29 11:46:55.363: INFO: namespace e2e-tests-configmap-wkrnm deletion completed in 6.389181476s

• [SLOW TEST:17.414 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
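The "as non-root" variant runs the consuming container under a non-root UID via the pod's securityContext, verifying the ConfigMap volume is still readable. A hedged sketch (UID and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # run as a non-root UID
    fsGroup: 1000                  # group ownership so the volume stays readable
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo
```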
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:46:55.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e9a4f9e6-2a30-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:46:55.527: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-njclt" to be "success or failure"
Dec 29 11:46:55.609: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 82.215993ms
Dec 29 11:46:57.623: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096176203s
Dec 29 11:46:59.636: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109421529s
Dec 29 11:47:01.655: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12802849s
Dec 29 11:47:03.669: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142256269s
Dec 29 11:47:05.686: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.158650289s
Dec 29 11:47:07.709: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.18260044s
STEP: Saw pod success
Dec 29 11:47:07.710: INFO: Pod "pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:47:07.716: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 11:47:08.041: INFO: Waiting for pod pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005 to disappear
Dec 29 11:47:08.052: INFO: Pod pod-configmaps-e9a5c515-2a30-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:47:08.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-njclt" for this suite.
Dec 29 11:47:14.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:47:14.319: INFO: namespace: e2e-tests-configmap-njclt, resource: bindings, ignored listing per whitelist
Dec 29 11:47:14.403: INFO: namespace e2e-tests-configmap-njclt deletion completed in 6.329018905s

• [SLOW TEST:19.040 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:47:14.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f51494c7-2a30-11ea-9252-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-f5149524-2a30-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f51494c7-2a30-11ea-9252-0242ac110005
STEP: Updating configmap cm-test-opt-upd-f5149524-2a30-11ea-9252-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-f514954a-2a30-11ea-9252-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:47:31.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l45hh" for this suite.
Dec 29 11:47:57.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:47:57.346: INFO: namespace: e2e-tests-projected-l45hh, resource: bindings, ignored listing per whitelist
Dec 29 11:47:57.486: INFO: namespace e2e-tests-projected-l45hh deletion completed in 26.306604432s

• [SLOW TEST:43.083 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
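The "optional updates" spec above creates two ConfigMaps (`opt-del` and `opt-upd`), starts a pod that projects them as optional sources, then deletes one, updates the other, and creates a third — waiting for the kubelet to re-sync the volume contents. The volume section of such a pod might look like this (a sketch; names are shortened from the log's generated ones):

```yaml
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del     # deleted mid-test; optional, so the pod keeps running
          optional: true
      - configMap:
          name: cm-test-opt-upd     # updated mid-test; kubelet re-syncs the file contents
          optional: true
      - configMap:
          name: cm-test-opt-create  # created after the pod starts; appears once it exists
          optional: true
```

Marking each source `optional: true` is what lets the "Deleting configmap" step succeed without failing the pod.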
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:47:57.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 29 11:47:57.728: INFO: Waiting up to 5m0s for pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-fgj7v" to be "success or failure"
Dec 29 11:47:57.739: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.225179ms
Dec 29 11:47:59.960: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232097999s
Dec 29 11:48:01.992: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264313543s
Dec 29 11:48:04.419: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.691363149s
Dec 29 11:48:06.440: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712579237s
Dec 29 11:48:08.464: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.736628927s
STEP: Saw pod success
Dec 29 11:48:08.465: INFO: Pod "pod-0ebbc3a5-2a31-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:48:08.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0ebbc3a5-2a31-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:48:08.669: INFO: Waiting for pod pod-0ebbc3a5-2a31-11ea-9252-0242ac110005 to disappear
Dec 29 11:48:08.682: INFO: Pod pod-0ebbc3a5-2a31-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:48:08.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fgj7v" for this suite.
Dec 29 11:48:17.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:48:18.203: INFO: namespace: e2e-tests-emptydir-fgj7v, resource: bindings, ignored listing per whitelist
Dec 29 11:48:18.332: INFO: namespace e2e-tests-emptydir-fgj7v deletion completed in 9.62639048s

• [SLOW TEST:20.846 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
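The EmptyDir "(non-root,0644,tmpfs)" spec exercises a memory-backed emptyDir written by a non-root container with 0644 file mode. A minimal sketch of that shape (the real test uses a dedicated mounttest image; this busybox version is an illustrative stand-in):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root writer
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
```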
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:48:18.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 29 11:48:18.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:20.660: INFO: stderr: ""
Dec 29 11:48:20.660: INFO: stdout: "pod/pause created\n"
Dec 29 11:48:20.660: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 29 11:48:20.660: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-6x6ls" to be "running and ready"
Dec 29 11:48:20.748: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 87.990224ms
Dec 29 11:48:22.764: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103510327s
Dec 29 11:48:24.788: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127876454s
Dec 29 11:48:26.890: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229279633s
Dec 29 11:48:28.902: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241517002s
Dec 29 11:48:30.920: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.259809468s
Dec 29 11:48:30.920: INFO: Pod "pause" satisfied condition "running and ready"
Dec 29 11:48:30.920: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 29 11:48:30.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:31.164: INFO: stderr: ""
Dec 29 11:48:31.164: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 29 11:48:31.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:31.287: INFO: stderr: ""
Dec 29 11:48:31.287: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 29 11:48:31.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:31.435: INFO: stderr: ""
Dec 29 11:48:31.435: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 29 11:48:31.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:31.534: INFO: stderr: ""
Dec 29 11:48:31.535: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 29 11:48:31.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:31.692: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 11:48:31.692: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 29 11:48:31.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-6x6ls'
Dec 29 11:48:31.869: INFO: stderr: "No resources found.\n"
Dec 29 11:48:31.869: INFO: stdout: ""
Dec 29 11:48:31.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-6x6ls -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 29 11:48:32.087: INFO: stderr: ""
Dec 29 11:48:32.087: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:48:32.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6x6ls" for this suite.
Dec 29 11:48:40.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:48:40.235: INFO: namespace: e2e-tests-kubectl-6x6ls, resource: bindings, ignored listing per whitelist
Dec 29 11:48:40.400: INFO: namespace e2e-tests-kubectl-6x6ls deletion completed in 8.290756721s

• [SLOW TEST:22.068 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:48:40.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 11:48:40.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-77b6b" to be "success or failure"
Dec 29 11:48:40.697: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.792428ms
Dec 29 11:48:42.709: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019501765s
Dec 29 11:48:44.747: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057781973s
Dec 29 11:48:46.756: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0665163s
Dec 29 11:48:49.511: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.821961781s
Dec 29 11:48:51.522: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.832769566s
STEP: Saw pod success
Dec 29 11:48:51.522: INFO: Pod "downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:48:51.528: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 11:48:52.406: INFO: Waiting for pod downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005 to disappear
Dec 29 11:48:52.521: INFO: Pod downwardapi-volume-2856f50f-2a31-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:48:52.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-77b6b" for this suite.
Dec 29 11:48:58.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:48:58.785: INFO: namespace: e2e-tests-projected-77b6b, resource: bindings, ignored listing per whitelist
Dec 29 11:48:58.815: INFO: namespace e2e-tests-projected-77b6b deletion completed in 6.270710283s

• [SLOW TEST:18.415 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
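The downward API spec above relies on the documented fallback: when a container sets no memory limit, a `resourceFieldRef` for `limits.memory` reports the node's allocatable memory instead. The relevant part of such a pod could be sketched as (names and mount path are illustrative):

```yaml
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    # no resources.limits.memory set: the downward API falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```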
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:48:58.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 29 11:48:59.051: INFO: Waiting up to 5m0s for pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005" in namespace "e2e-tests-containers-s6xv5" to be "success or failure"
Dec 29 11:48:59.062: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.214586ms
Dec 29 11:49:01.081: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030335649s
Dec 29 11:49:03.089: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0380848s
Dec 29 11:49:05.109: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058165974s
Dec 29 11:49:07.132: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081329204s
Dec 29 11:49:09.157: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105862378s
Dec 29 11:49:11.178: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.126614474s
STEP: Saw pod success
Dec 29 11:49:11.178: INFO: Pod "client-containers-3340e1bb-2a31-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:49:11.183: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3340e1bb-2a31-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:49:11.341: INFO: Waiting for pod client-containers-3340e1bb-2a31-11ea-9252-0242ac110005 to disappear
Dec 29 11:49:11.353: INFO: Pod client-containers-3340e1bb-2a31-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:49:11.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-s6xv5" for this suite.
Dec 29 11:49:17.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:49:17.548: INFO: namespace: e2e-tests-containers-s6xv5, resource: bindings, ignored listing per whitelist
Dec 29 11:49:17.555: INFO: namespace e2e-tests-containers-s6xv5 deletion completed in 6.189186911s

• [SLOW TEST:18.739 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
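The pod-phase lines above show the framework's standard wait pattern: poll the pod roughly every two seconds until its phase becomes terminal ("success or failure") or a 5m0s timeout expires. A minimal sketch of that loop (function and variable names are illustrative, not the framework's actual Go code):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal phase or we time out,
    mirroring the 'Waiting up to 5m0s for pod ...' loop in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated pod: Pending for the first four polls, then Succeeded.
phases = iter(["Pending"] * 4 + ["Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), interval=0.01))  # Succeeded
```

The real framework also logs the elapsed time on each poll, which is where the "Elapsed: 2.03s, 4.03s, ..." lines come from.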
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:49:17.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 29 11:49:28.402: INFO: Successfully updated pod "labelsupdate3e6fd239-2a31-11ea-9252-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:49:30.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lsfbt" for this suite.
Dec 29 11:49:54.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:49:54.884: INFO: namespace: e2e-tests-projected-lsfbt, resource: bindings, ignored listing per whitelist
Dec 29 11:49:54.989: INFO: namespace e2e-tests-projected-lsfbt deletion completed in 24.318773396s

• [SLOW TEST:37.434 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
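The labels-update test above mounts `metadata.labels` through a projected downward API volume and waits for the kubelet to rewrite the file after the pod's labels change. The downward API serializes labels one per line as `key="value"`; a sketch of that rendering (assumed format, names illustrative):

```python
def render_labels_file(labels):
    """Render metadata.labels the way the downward API volume writes
    them: one key="value" pair per line, keys sorted for stability."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))

print(render_labels_file({"name": "labelsupdate", "updated": "true"}))
```

The test passes once the mounted file's content reflects the updated label set, which is why it succeeds shortly after the "Successfully updated pod" line.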
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:49:54.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 11:49:55.250: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 29 11:49:55.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gv7w9/daemonsets","resourceVersion":"16456430"},"items":null}

Dec 29 11:49:55.266: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gv7w9/pods","resourceVersion":"16456430"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:49:55.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gv7w9" for this suite.
Dec 29 11:50:01.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:50:01.400: INFO: namespace: e2e-tests-daemonsets-gv7w9, resource: bindings, ignored listing per whitelist
Dec 29 11:50:01.536: INFO: namespace e2e-tests-daemonsets-gv7w9 deletion completed in 6.256611661s

S [SKIPPING] [6.547 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 29 11:49:55.250: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
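The skip above reports "Requires at least 2 nodes (not -1)": -1 is the value the node-count lookup yields when it fails, so a failed lookup trips the same guard as a genuinely too-small cluster. A sketch of that guard (names are illustrative stand-ins for the framework's skip helper):

```python
class Skipped(Exception):
    """Stand-in for the framework's skip signal, which Ginkgo reports
    as 'S [SKIPPING]' in the summary."""

def skip_unless_node_count_at_least(required, node_count):
    # A failed lookup returns -1, which also fails the >= check,
    # so the test skips rather than erroring out.
    if node_count < required:
        raise Skipped(f"Requires at least {required} nodes (not {node_count})")

try:
    skip_unless_node_count_at_least(2, -1)
except Skipped as e:
    print(e)  # Requires at least 2 nodes (not -1)
```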
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:50:01.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 29 11:50:01.683: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:50:20.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-b9s7d" for this suite.
Dec 29 11:50:26.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:50:26.297: INFO: namespace: e2e-tests-init-container-b9s7d, resource: bindings, ignored listing per whitelist
Dec 29 11:50:26.397: INFO: namespace e2e-tests-init-container-b9s7d deletion completed in 6.172640791s

• [SLOW TEST:24.861 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
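The init-container test above relies on the pod lifecycle rule that init containers run strictly in order, each to completion, before any app container starts. A compressed sketch of that ordering (a model of the semantics, not the kubelet's implementation):

```python
def run_pod(init_containers, containers):
    """Init containers run one at a time, in order; a nonzero exit
    blocks the app containers. With restartPolicy: Never, a failed
    init leaves the pod terminal (Init:Error)."""
    for init in init_containers:
        if init() != 0:
            return "Init:Error"
    for container in containers:
        container()
    return "Succeeded"

print(run_pod([lambda: 0, lambda: 0], [lambda: 0]))  # Succeeded
```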
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:50:26.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 11:50:56.687: INFO: Container started at 2019-12-29 11:50:34 +0000 UTC, pod became ready at 2019-12-29 11:50:56 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:50:56.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9zznz" for this suite.
Dec 29 11:51:20.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:51:20.808: INFO: namespace: e2e-tests-container-probe-9zznz, resource: bindings, ignored listing per whitelist
Dec 29 11:51:20.845: INFO: namespace e2e-tests-container-probe-9zznz deletion completed in 24.149751108s

• [SLOW TEST:54.447 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
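The key assertion in the readiness-probe test is the gap between the two timestamps logged above: the container started at 11:50:34 but the pod only became Ready at 11:50:56, consistent with the probe's initial delay. A sketch of the delay computation the test performs:

```python
from datetime import datetime

def readiness_delay(started_at, ready_at, fmt="%Y-%m-%d %H:%M:%S"):
    """Seconds between container start and pod Ready, which the test
    compares against the probe's initialDelaySeconds."""
    start = datetime.strptime(started_at, fmt)
    ready = datetime.strptime(ready_at, fmt)
    return (ready - start).total_seconds()

print(readiness_delay("2019-12-29 11:50:34", "2019-12-29 11:50:56"))  # 22.0
```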
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:51:20.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2ckbc
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 29 11:51:21.070: INFO: Found 0 stateful pods, waiting for 3
Dec 29 11:51:31.104: INFO: Found 1 stateful pods, waiting for 3
Dec 29 11:51:41.085: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:51:41.085: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:51:41.085: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 29 11:51:51.083: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:51:51.084: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:51:51.084: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 29 11:51:51.202: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 29 11:52:01.347: INFO: Updating stateful set ss2
Dec 29 11:52:01.369: INFO: Waiting for Pod e2e-tests-statefulset-2ckbc/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 11:52:11.400: INFO: Waiting for Pod e2e-tests-statefulset-2ckbc/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 29 11:52:22.111: INFO: Found 2 stateful pods, waiting for 3
Dec 29 11:52:32.129: INFO: Found 2 stateful pods, waiting for 3
Dec 29 11:52:42.159: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:52:42.159: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:52:42.159: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 29 11:52:52.131: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:52:52.131: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:52:52.131: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 29 11:52:52.180: INFO: Updating stateful set ss2
Dec 29 11:52:52.215: INFO: Waiting for Pod e2e-tests-statefulset-2ckbc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 11:53:02.239: INFO: Waiting for Pod e2e-tests-statefulset-2ckbc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 11:53:12.267: INFO: Updating stateful set ss2
Dec 29 11:53:12.484: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ckbc/ss2 to complete update
Dec 29 11:53:12.484: INFO: Waiting for Pod e2e-tests-statefulset-2ckbc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 11:53:22.518: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ckbc/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 29 11:53:32.540: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2ckbc
Dec 29 11:53:32.552: INFO: Scaling statefulset ss2 to 0
Dec 29 11:54:12.658: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:54:12.679: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:54:12.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2ckbc" for this suite.
Dec 29 11:54:21.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:54:21.187: INFO: namespace: e2e-tests-statefulset-2ckbc, resource: bindings, ignored listing per whitelist
Dec 29 11:54:21.263: INFO: namespace e2e-tests-statefulset-2ckbc deletion completed in 8.382818201s

• [SLOW TEST:180.418 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
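The canary and phased phases above are both driven by `spec.updateStrategy.rollingUpdate.partition`: only pods whose ordinal is >= the partition move to the new revision, so a partition greater than the replica count updates nothing, and lowering it step by step rolls pods out highest-ordinal first. A sketch of that selection rule (a model of the semantics, not the controller's code):

```python
def target_revisions(replicas, partition, old, new):
    """Pods with ordinal >= partition get the new revision; the rest
    stay on the old one. partition >= replicas updates nothing."""
    return [new if ordinal >= partition else old for ordinal in range(replicas)]

# Canary: partition=2 on 3 replicas updates only ss2-2, matching the
# log's wait for ss2-2 to reach revision ss2-7c9b54fd4c.
print(target_revisions(3, 2, "ss2-6c5cd755cd", "ss2-7c9b54fd4c"))
```

The "Restoring Pods to the correct revision when they are deleted" step checks the same rule: a deleted canary pod is recreated at whichever revision its ordinal maps to.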
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:54:21.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:54:54.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-9j5jc" for this suite.
Dec 29 11:55:18.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:55:18.964: INFO: namespace: e2e-tests-replication-controller-9j5jc, resource: bindings, ignored listing per whitelist
Dec 29 11:55:18.980: INFO: namespace e2e-tests-replication-controller-9j5jc deletion completed in 24.267806072s

• [SLOW TEST:57.717 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
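Adoption in the test above hinges on label-selector matching: a ReplicationController adopts an existing orphan pod (one with no controller owner reference) when every key/value in its selector appears in the pod's labels. A minimal sketch of that match (illustrative, ignoring owner-reference bookkeeping):

```python
def selector_matches(selector, labels):
    """True when every selector key/value pair is present in the
    pod's labels; extra pod labels don't prevent adoption."""
    return all(labels.get(k) == v for k, v in selector.items())

print(selector_matches({"name": "pod-adoption"},
                       {"name": "pod-adoption", "extra": "x"}))  # True
```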
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:55:18.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xb5q7 in namespace e2e-tests-proxy-75ljs
I1229 11:55:19.277674       9 runners.go:184] Created replication controller with name: proxy-service-xb5q7, namespace: e2e-tests-proxy-75ljs, replica count: 1
I1229 11:55:20.328421       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:21.329678       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:22.330217       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:23.331062       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:24.331847       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:25.332496       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:26.332990       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:27.333511       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:28.334164       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1229 11:55:29.335208       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:30.336119       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:31.337414       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:32.339424       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:33.340260       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:34.340840       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:35.341300       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1229 11:55:36.342198       9 runners.go:184] proxy-service-xb5q7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 29 11:55:36.417: INFO: Endpoint e2e-tests-proxy-75ljs/proxy-service-xb5q7 is not ready yet
Dec 29 11:55:38.440: INFO: setup took 19.261602215s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 29 11:55:38.525: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-75ljs/pods/proxy-service-xb5q7-ljr22/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 29 11:55:53.662: INFO: Waiting up to 5m0s for pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-64tvs" to be "success or failure"
Dec 29 11:55:53.696: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.547769ms
Dec 29 11:55:55.710: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047969932s
Dec 29 11:55:57.731: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068235799s
Dec 29 11:55:59.875: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212696364s
Dec 29 11:56:01.893: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23018349s
Dec 29 11:56:03.949: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.286467519s
STEP: Saw pod success
Dec 29 11:56:03.949: INFO: Pod "pod-2a65ec8c-2a32-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:56:03.958: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2a65ec8c-2a32-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 11:56:04.756: INFO: Waiting for pod pod-2a65ec8c-2a32-11ea-9252-0242ac110005 to disappear
Dec 29 11:56:04.776: INFO: Pod pod-2a65ec8c-2a32-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:56:04.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-64tvs" for this suite.
Dec 29 11:56:10.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:56:11.036: INFO: namespace: e2e-tests-emptydir-64tvs, resource: bindings, ignored listing per whitelist
Dec 29 11:56:11.116: INFO: namespace e2e-tests-emptydir-64tvs deletion completed in 6.324357086s

• [SLOW TEST:17.711 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
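The emptyDir test above verifies that the volume directory carries mode 0777 regardless of the process umask, which is why the plugin must chmod the directory explicitly after creating it (mkdir alone is masked by the umask). A sketch of that detail:

```python
import os
import stat
import tempfile

def make_emptydir(path, mode=0o777):
    """Create the volume directory and force the requested mode, as
    the emptydir plugin does; chmod is not subject to the umask."""
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)

d = os.path.join(tempfile.mkdtemp(), "emptydir")
print(oct(make_emptydir(d)))  # 0o777
```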
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:56:11.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 29 11:56:11.487: INFO: Waiting up to 5m0s for pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-c74p9" to be "success or failure"
Dec 29 11:56:11.500: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.149948ms
Dec 29 11:56:13.569: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082535312s
Dec 29 11:56:15.587: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099924016s
Dec 29 11:56:17.924: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437246452s
Dec 29 11:56:20.006: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519050378s
Dec 29 11:56:22.021: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.533684317s
Dec 29 11:56:24.330: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.842765895s
STEP: Saw pod success
Dec 29 11:56:24.330: INFO: Pod "downward-api-350810d4-2a32-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:56:24.342: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-350810d4-2a32-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 11:56:25.033: INFO: Waiting for pod downward-api-350810d4-2a32-11ea-9252-0242ac110005 to disappear
Dec 29 11:56:25.047: INFO: Pod downward-api-350810d4-2a32-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:56:25.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c74p9" for this suite.
Dec 29 11:56:31.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:56:31.212: INFO: namespace: e2e-tests-downward-api-c74p9, resource: bindings, ignored listing per whitelist
Dec 29 11:56:31.285: INFO: namespace e2e-tests-downward-api-c74p9 deletion completed in 6.225045039s

• [SLOW TEST:20.167 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
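The downward API test above injects the pod's name, namespace, and IP into the container via `env.valueFrom.fieldRef` entries, then greps the container log for each value. A sketch of the mapping from pod fields to environment variables (variable names are the test's conventional choices, shown here illustratively):

```python
def downward_env(pod):
    """Map downward API fieldRefs to env vars: metadata.name,
    metadata.namespace, and status.podIP."""
    return {
        "POD_NAME": pod["metadata"]["name"],
        "POD_NAMESPACE": pod["metadata"]["namespace"],
        "POD_IP": pod["status"]["podIP"],
    }

pod = {"metadata": {"name": "downward-api-demo", "namespace": "default"},
       "status": {"podIP": "10.32.0.4"}}
print(downward_env(pod))
```

Note that `status.podIP` is only populated once the pod is scheduled and running, which is one reason the test waits for the terminal phase before reading the log.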
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:56:31.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-40f76c59-2a32-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 11:56:31.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-pwcc6" to be "success or failure"
Dec 29 11:56:31.544: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.601821ms
Dec 29 11:56:33.561: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042010137s
Dec 29 11:56:35.591: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071680548s
Dec 29 11:56:37.622: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102741399s
Dec 29 11:56:40.212: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692792261s
Dec 29 11:56:42.276: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.757419074s
STEP: Saw pod success
Dec 29 11:56:42.277: INFO: Pod "pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:56:42.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 11:56:43.119: INFO: Waiting for pod pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005 to disappear
Dec 29 11:56:43.145: INFO: Pod pod-configmaps-40f82f28-2a32-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:56:43.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pwcc6" for this suite.
Dec 29 11:56:49.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:56:49.866: INFO: namespace: e2e-tests-configmap-pwcc6, resource: bindings, ignored listing per whitelist
Dec 29 11:56:49.922: INFO: namespace e2e-tests-configmap-pwcc6 deletion completed in 6.670704308s

• [SLOW TEST:18.637 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
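A configMap volume, as consumed in the test above, projects each key of the ConfigMap's data as a file under the mount path, with the key's value as the file content; the test pod then cats the file and the framework checks the output. A sketch of that projection (a model of the behavior, not the kubelet's atomic-writer implementation):

```python
import os
import tempfile

def project_configmap(data, mount_path):
    """Write each ConfigMap key as a file named after the key, with the
    value as its content, and return the resulting file names."""
    os.makedirs(mount_path, exist_ok=True)
    for key, value in data.items():
        with open(os.path.join(mount_path, key), "w") as f:
            f.write(value)
    return sorted(os.listdir(mount_path))

mount = os.path.join(tempfile.mkdtemp(), "cm")
print(project_configmap({"data-1": "value-1"}, mount))  # ['data-1']
```

The real kubelet writes the files through a timestamped symlink so that updates appear atomically, a detail this sketch omits.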
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:56:49.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 11:56:50.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-hzq7b" to be "success or failure"
Dec 29 11:56:50.269: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.922405ms
Dec 29 11:56:52.288: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061944841s
Dec 29 11:56:54.302: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075786348s
Dec 29 11:56:56.315: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08893794s
Dec 29 11:56:58.755: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529658685s
Dec 29 11:57:00.992: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.766323535s
STEP: Saw pod success
Dec 29 11:57:00.992: INFO: Pod "downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 11:57:01.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 11:57:01.087: INFO: Waiting for pod downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005 to disappear
Dec 29 11:57:01.155: INFO: Pod downwardapi-volume-4c05f222-2a32-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:57:01.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hzq7b" for this suite.
Dec 29 11:57:07.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:57:07.261: INFO: namespace: e2e-tests-downward-api-hzq7b, resource: bindings, ignored listing per whitelist
Dec 29 11:57:07.395: INFO: namespace e2e-tests-downward-api-hzq7b deletion completed in 6.227721206s

• [SLOW TEST:17.473 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:57:07.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 29 11:57:07.695: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 29 11:57:12.719: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:57:14.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-blw7z" for this suite.
Dec 29 11:57:29.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:57:29.670: INFO: namespace: e2e-tests-replication-controller-blw7z, resource: bindings, ignored listing per whitelist
Dec 29 11:57:29.809: INFO: namespace e2e-tests-replication-controller-blw7z deletion completed in 15.536374084s

• [SLOW TEST:22.413 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:57:29.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 11:57:30.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-sz7p8" for this suite.
Dec 29 11:57:36.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 11:57:36.250: INFO: namespace: e2e-tests-services-sz7p8, resource: bindings, ignored listing per whitelist
Dec 29 11:57:36.263: INFO: namespace e2e-tests-services-sz7p8 deletion completed in 6.220127588s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.453 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 11:57:36.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-p9fsp
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-p9fsp
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-p9fsp
Dec 29 11:57:36.594: INFO: Found 0 stateful pods, waiting for 1
Dec 29 11:57:46.624: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 29 11:57:46.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:57:47.278: INFO: stderr: ""
Dec 29 11:57:47.278: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:57:47.278: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:57:47.301: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 29 11:57:57.334: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:57:57.334: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:57:57.380: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:57:57.380: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:57:57.381: INFO: 
Dec 29 11:57:57.381: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 29 11:57:58.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973814542s
Dec 29 11:57:59.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.954522519s
Dec 29 11:58:00.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.678182444s
Dec 29 11:58:01.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.661879048s
Dec 29 11:58:02.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.648080971s
Dec 29 11:58:04.224: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.596439276s
Dec 29 11:58:05.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.130701024s
Dec 29 11:58:06.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 663.891278ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-p9fsp
Dec 29 11:58:07.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:58:08.907: INFO: stderr: ""
Dec 29 11:58:08.908: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 11:58:08.908: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 11:58:08.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:58:09.177: INFO: rc: 1
Dec 29 11:58:09.178: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0024da180 exit status 1   true [0xc000cfc000 0xc000cfc018 0xc000cfc030] [0xc000cfc000 0xc000cfc018 0xc000cfc030] [0xc000cfc010 0xc000cfc028] [0x935700 0x935700] 0xc00269c1e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 29 11:58:19.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:58:19.676: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 29 11:58:19.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 11:58:19.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 11:58:19.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:58:20.130: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 29 11:58:20.130: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 11:58:20.130: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 11:58:20.148: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:58:20.148: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 11:58:20.148: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 29 11:58:20.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:58:20.731: INFO: stderr: ""
Dec 29 11:58:20.732: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:58:20.732: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:58:20.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:58:21.288: INFO: stderr: ""
Dec 29 11:58:21.288: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:58:21.288: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:58:21.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 11:58:21.653: INFO: stderr: ""
Dec 29 11:58:21.653: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 11:58:21.653: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 11:58:21.653: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 11:58:21.675: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 29 11:58:31.730: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:58:31.730: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:58:31.730: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 29 11:58:31.845: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:31.845: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:31.845: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:31.845: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:31.845: INFO: 
Dec 29 11:58:31.845: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 29 11:58:35.146: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:35.146: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:35.146: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:35.146: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:35.146: INFO: 
Dec 29 11:58:35.146: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 29 11:58:36.325: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:36.325: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:36.325: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:36.325: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:36.325: INFO: 
Dec 29 11:58:36.325: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 29 11:58:37.348: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:37.349: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:37.349: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:37.349: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:37.349: INFO: 
Dec 29 11:58:37.349: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 29 11:58:38.881: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:38.881: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:38.882: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:38.882: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:38.882: INFO: 
Dec 29 11:58:38.882: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 29 11:58:39.910: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:39.910: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:39.910: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:39.910: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:39.910: INFO: 
Dec 29 11:58:39.910: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 29 11:58:41.277: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 29 11:58:41.277: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:36 +0000 UTC  }]
Dec 29 11:58:41.277: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:41.277: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 11:57:57 +0000 UTC  }]
Dec 29 11:58:41.277: INFO: 
Dec 29 11:58:41.277: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-p9fsp
Dec 29 11:58:42.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:58:42.622: INFO: rc: 1
Dec 29 11:58:42.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001b8df50 exit status 1   true [0xc0001174c8 0xc0001175a8 0xc000117628] [0xc0001174c8 0xc0001175a8 0xc000117628] [0xc000117548 0xc000117608] [0x935700 0x935700] 0xc001fd5020 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 29 11:58:52.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 11:58:52.860: INFO: rc: 1
Dec 29 11:58:52.861: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00162f380 exit status 1   true [0xc00132a068 0xc00132a080 0xc00132a098] [0xc00132a068 0xc00132a080 0xc00132a098] [0xc00132a078 0xc00132a090] [0x935700 0x935700] 0xc00253f4a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 29 12:03:46.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 12:03:46.799: INFO: rc: 1
Dec 29 12:03:46.799: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 29 12:03:46.799: INFO: Scaling statefulset ss to 0
Dec 29 12:03:46.824: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 29 12:03:46.827: INFO: Deleting all statefulset in ns e2e-tests-statefulset-p9fsp
Dec 29 12:03:46.831: INFO: Scaling statefulset ss to 0
Dec 29 12:03:46.842: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 12:03:46.845: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:03:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-p9fsp" for this suite.
Dec 29 12:03:55.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:03:55.158: INFO: namespace: e2e-tests-statefulset-p9fsp, resource: bindings, ignored listing per whitelist
Dec 29 12:03:55.223: INFO: namespace e2e-tests-statefulset-p9fsp deletion completed in 8.228583728s
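The teardown above scales the StatefulSet to 0 and then polls until `status.replicas` reaches 0 before deleting it. A minimal sketch of that poll, generic over the status command (`wait_for_zero` and its parameters are illustrative names of mine, not the framework's helpers):

```shell
# Poll a command that prints a replica count until it reports 0,
# checking at most max_tries times with a 1s pause between checks.
wait_for_zero() {
  local max_tries="$1"; shift
  local i=0
  while [ "$i" -lt "$max_tries" ]; do
    if [ "$("$@")" = "0" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Against a real cluster this would wrap something like (namespace from the log):
#   kubectl -n e2e-tests-statefulset-p9fsp get statefulset ss \
#     -o jsonpath='{.status.replicas}'
```

The real framework watches the StatefulSet through the API rather than shelling out, but the control flow is the same: bounded polling, then a hard failure if the count never reaches the target.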

• [SLOW TEST:378.960 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
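The long stretch of `Waiting 10s to retry failed RunHostCmd` lines in the StatefulSet test above is the framework's retry-until-deadline pattern: re-run the same `kubectl exec` every 10 seconds for as long as the pod is missing. A minimal shell sketch of that pattern (the function name and parameters are illustrative, not the framework's API):

```shell
# Run a command; on failure, wait `interval` seconds and retry,
# giving up after `max_tries` attempts (mirrors the 10s retry loop in the log).
retry_host_cmd() {
  local interval="$1" max_tries="$2"; shift 2
  local try=0
  until "$@"; do
    try=$((try + 1))
    if [ "$try" -ge "$max_tries" ]; then
      echo "giving up after $try attempts" >&2
      return 1
    fi
    sleep "$interval"
  done
}
```

In the log the wrapped command was `kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p9fsp ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'`, which kept returning rc 1 because `ss-0` no longer existed; the test eventually stopped retrying and proceeded to scale the set to 0.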
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:03:55.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:03:55.678: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 29 12:04:00.785: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 29 12:04:06.804: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 29 12:04:06.882: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-wrkqp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wrkqp/deployments/test-cleanup-deployment,UID:505dee9f-2a33-11ea-a994-fa163e34d433,ResourceVersion:16458167,Generation:1,CreationTimestamp:2019-12-29 12:04:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 29 12:04:06.888: INFO: New ReplicaSet "test-cleanup-deployment-6df768c57" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57,GenerateName:,Namespace:e2e-tests-deployment-wrkqp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wrkqp/replicasets/test-cleanup-deployment-6df768c57,UID:506718db-2a33-11ea-a994-fa163e34d433,ResourceVersion:16458169,Generation:1,CreationTimestamp:2019-12-29 12:04:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 505dee9f-2a33-11ea-a994-fa163e34d433 0xc0022b6db0 0xc0022b6db1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 12:04:06.888: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 29 12:04:06.889: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-wrkqp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wrkqp/replicasets/test-cleanup-controller,UID:49b642f4-2a33-11ea-a994-fa163e34d433,ResourceVersion:16458168,Generation:1,CreationTimestamp:2019-12-29 12:03:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 505dee9f-2a33-11ea-a994-fa163e34d433 0xc0022b6cdf 0xc0022b6cf0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 29 12:04:06.899: INFO: Pod "test-cleanup-controller-vpzgs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vpzgs,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-wrkqp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wrkqp/pods/test-cleanup-controller-vpzgs,UID:49bc0b3e-2a33-11ea-a994-fa163e34d433,ResourceVersion:16458163,Generation:0,CreationTimestamp:2019-12-29 12:03:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 49b642f4-2a33-11ea-a994-fa163e34d433 0xc0022b7677 0xc0022b7678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kr26x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kr26x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-kr26x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b76e0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0022b7700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:03:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:04:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:04:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:03:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-29 12:03:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:04:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2a339a8f4f55876ac34722859619e253b88d332359124d9a025d41aac1e02688}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:04:06.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wrkqp" for this suite.
Dec 29 12:04:15.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:04:15.394: INFO: namespace: e2e-tests-deployment-wrkqp, resource: bindings, ignored listing per whitelist
Dec 29 12:04:15.517: INFO: namespace e2e-tests-deployment-wrkqp deletion completed in 8.527213584s

• [SLOW TEST:20.293 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:04:15.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 29 12:04:15.846: INFO: Waiting up to 5m0s for pod "pod-55bafca4-2a33-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-cnwpk" to be "success or failure"
Dec 29 12:04:16.040: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 194.417274ms
Dec 29 12:04:18.079: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232777152s
Dec 29 12:04:20.116: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270416654s
Dec 29 12:04:22.520: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673476886s
Dec 29 12:04:24.560: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713581191s
Dec 29 12:04:26.602: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755695527s
STEP: Saw pod success
Dec 29 12:04:26.602: INFO: Pod "pod-55bafca4-2a33-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:04:26.613: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-55bafca4-2a33-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:04:26.915: INFO: Waiting for pod pod-55bafca4-2a33-11ea-9252-0242ac110005 to disappear
Dec 29 12:04:27.003: INFO: Pod pod-55bafca4-2a33-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:04:27.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cnwpk" for this suite.
Dec 29 12:04:33.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:04:33.311: INFO: namespace: e2e-tests-emptydir-cnwpk, resource: bindings, ignored listing per whitelist
Dec 29 12:04:33.440: INFO: namespace e2e-tests-emptydir-cnwpk deletion completed in 6.422133196s

• [SLOW TEST:17.923 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:04:33.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-605f1b0e-2a33-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 12:04:33.703: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-2vbrm" to be "success or failure"
Dec 29 12:04:33.720: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.775058ms
Dec 29 12:04:36.123: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419807952s
Dec 29 12:04:38.139: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436253539s
Dec 29 12:04:40.267: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.564497654s
Dec 29 12:04:42.292: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589443158s
Dec 29 12:04:44.428: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.724986362s
STEP: Saw pod success
Dec 29 12:04:44.428: INFO: Pod "pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:04:44.444: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 29 12:04:44.944: INFO: Waiting for pod pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005 to disappear
Dec 29 12:04:44.969: INFO: Pod pod-projected-secrets-6060266f-2a33-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:04:44.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2vbrm" for this suite.
Dec 29 12:04:51.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:04:51.080: INFO: namespace: e2e-tests-projected-2vbrm, resource: bindings, ignored listing per whitelist
Dec 29 12:04:51.223: INFO: namespace e2e-tests-projected-2vbrm deletion completed in 6.241342338s

• [SLOW TEST:17.782 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:04:51.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:04:51.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-t7lj7" to be "success or failure"
Dec 29 12:04:51.510: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.97378ms
Dec 29 12:04:53.539: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06979772s
Dec 29 12:04:55.568: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098380659s
Dec 29 12:04:57.586: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116869403s
Dec 29 12:04:59.600: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131173159s
Dec 29 12:05:01.612: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142514439s
STEP: Saw pod success
Dec 29 12:05:01.612: INFO: Pod "downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:05:01.618: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:05:02.688: INFO: Waiting for pod downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005 to disappear
Dec 29 12:05:02.927: INFO: Pod downwardapi-volume-6aee5e37-2a33-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:05:02.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t7lj7" for this suite.
Dec 29 12:05:09.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:05:09.337: INFO: namespace: e2e-tests-projected-t7lj7, resource: bindings, ignored listing per whitelist
Dec 29 12:05:09.339: INFO: namespace e2e-tests-projected-t7lj7 deletion completed in 6.395560768s

• [SLOW TEST:18.115 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:05:09.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-djv8d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-djv8d to expose endpoints map[]
Dec 29 12:05:09.780: INFO: Get endpoints failed (68.191471ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 29 12:05:10.793: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-djv8d exposes endpoints map[] (1.080613412s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-djv8d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-djv8d to expose endpoints map[pod1:[100]]
Dec 29 12:05:15.078: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.266550669s elapsed, will retry)
Dec 29 12:05:20.525: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-djv8d exposes endpoints map[pod1:[100]] (9.713015264s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-djv8d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-djv8d to expose endpoints map[pod1:[100] pod2:[101]]
Dec 29 12:05:24.892: INFO: Unexpected endpoints: found map[7681ae72-2a33-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (4.327958062s elapsed, will retry)
Dec 29 12:05:29.032: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-djv8d exposes endpoints map[pod1:[100] pod2:[101]] (8.468036236s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-djv8d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-djv8d to expose endpoints map[pod2:[101]]
Dec 29 12:05:30.228: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-djv8d exposes endpoints map[pod2:[101]] (1.179846992s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-djv8d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-djv8d to expose endpoints map[]
Dec 29 12:05:31.544: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-djv8d exposes endpoints map[] (1.304010099s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:05:32.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-djv8d" for this suite.
Dec 29 12:05:55.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:05:55.404: INFO: namespace: e2e-tests-services-djv8d, resource: bindings, ignored listing per whitelist
Dec 29 12:05:55.520: INFO: namespace e2e-tests-services-djv8d deletion completed in 22.922000984s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.181 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:05:55.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-7v69
STEP: Creating a pod to test atomic-volume-subpath
Dec 29 12:05:55.872: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7v69" in namespace "e2e-tests-subpath-nx7tq" to be "success or failure"
Dec 29 12:05:55.906: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 33.577749ms
Dec 29 12:05:57.988: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11527288s
Dec 29 12:06:00.026: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153598458s
Dec 29 12:06:02.069: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196332156s
Dec 29 12:06:04.732: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 8.860137061s
Dec 29 12:06:06.756: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 10.883933485s
Dec 29 12:06:08.770: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Pending", Reason="", readiness=false. Elapsed: 12.897585792s
Dec 29 12:06:10.780: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 14.907719079s
Dec 29 12:06:12.797: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 16.924625695s
Dec 29 12:06:14.809: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 18.936402004s
Dec 29 12:06:16.844: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 20.971268782s
Dec 29 12:06:18.867: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 22.994269322s
Dec 29 12:06:20.887: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 25.014949324s
Dec 29 12:06:22.907: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 27.034691338s
Dec 29 12:06:24.917: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 29.044885503s
Dec 29 12:06:26.935: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 31.062753924s
Dec 29 12:06:28.947: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Running", Reason="", readiness=false. Elapsed: 33.07508659s
Dec 29 12:06:31.681: INFO: Pod "pod-subpath-test-configmap-7v69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.808760942s
STEP: Saw pod success
Dec 29 12:06:31.681: INFO: Pod "pod-subpath-test-configmap-7v69" satisfied condition "success or failure"
Dec 29 12:06:31.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-7v69 container test-container-subpath-configmap-7v69: 
STEP: delete the pod
Dec 29 12:06:31.889: INFO: Waiting for pod pod-subpath-test-configmap-7v69 to disappear
Dec 29 12:06:31.920: INFO: Pod pod-subpath-test-configmap-7v69 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7v69
Dec 29 12:06:31.920: INFO: Deleting pod "pod-subpath-test-configmap-7v69" in namespace "e2e-tests-subpath-nx7tq"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:06:31.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nx7tq" for this suite.
Dec 29 12:06:40.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:06:40.122: INFO: namespace: e2e-tests-subpath-nx7tq, resource: bindings, ignored listing per whitelist
Dec 29 12:06:40.129: INFO: namespace e2e-tests-subpath-nx7tq deletion completed in 8.139887879s

• [SLOW TEST:44.609 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:06:40.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 29 12:07:04.694: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:04.694: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:05.330: INFO: Exec stderr: ""
Dec 29 12:07:05.331: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:05.331: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:05.774: INFO: Exec stderr: ""
Dec 29 12:07:05.775: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:05.775: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:06.069: INFO: Exec stderr: ""
Dec 29 12:07:06.069: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:06.069: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:06.445: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 29 12:07:06.445: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:06.445: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:06.777: INFO: Exec stderr: ""
Dec 29 12:07:06.777: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:06.778: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:07.083: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 29 12:07:07.083: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:07.083: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:07.384: INFO: Exec stderr: ""
Dec 29 12:07:07.385: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:07.385: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:07.755: INFO: Exec stderr: ""
Dec 29 12:07:07.756: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:07.756: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:08.073: INFO: Exec stderr: ""
Dec 29 12:07:08.074: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ppxcb PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:07:08.074: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:07:08.466: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:07:08.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-ppxcb" for this suite.
Dec 29 12:08:08.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:08:08.747: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-ppxcb, resource: bindings, ignored listing per whitelist
Dec 29 12:08:08.767: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-ppxcb deletion completed in 1m0.27547412s

• [SLOW TEST:88.637 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:08:08.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e0ac1cc9-2a33-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 12:08:08.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-rd9cq" to be "success or failure"
Dec 29 12:08:09.049: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.77866ms
Dec 29 12:08:11.064: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088574028s
Dec 29 12:08:13.076: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10035544s
Dec 29 12:08:15.108: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132515427s
Dec 29 12:08:17.202: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226045135s
Dec 29 12:08:19.222: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.246652845s
Dec 29 12:08:21.365: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.38959855s
STEP: Saw pod success
Dec 29 12:08:21.365: INFO: Pod "pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:08:21.374: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 12:08:21.449: INFO: Waiting for pod pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005 to disappear
Dec 29 12:08:21.503: INFO: Pod pod-configmaps-e0af2966-2a33-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:08:21.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rd9cq" for this suite.
Dec 29 12:08:27.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:08:27.772: INFO: namespace: e2e-tests-configmap-rd9cq, resource: bindings, ignored listing per whitelist
Dec 29 12:08:27.854: INFO: namespace e2e-tests-configmap-rd9cq deletion completed in 6.342544074s

• [SLOW TEST:19.087 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:08:27.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 29 12:08:28.150: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 29 12:08:28.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:08:32.285: INFO: stderr: ""
Dec 29 12:08:32.285: INFO: stdout: "service/redis-slave created\n"
Dec 29 12:08:32.286: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 29 12:08:32.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:08:32.690: INFO: stderr: ""
Dec 29 12:08:32.691: INFO: stdout: "service/redis-master created\n"
Dec 29 12:08:32.691: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 29 12:08:32.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:08:33.106: INFO: stderr: ""
Dec 29 12:08:33.106: INFO: stdout: "service/frontend created\n"
Dec 29 12:08:33.108: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 29 12:08:33.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:08:33.456: INFO: stderr: ""
Dec 29 12:08:33.456: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 29 12:08:33.458: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 29 12:08:33.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:08:33.882: INFO: stderr: ""
Dec 29 12:08:33.882: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 29 12:08:33.884: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 29 12:08:33.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:08:34.408: INFO: stderr: ""
Dec 29 12:08:34.409: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 29 12:08:34.409: INFO: Waiting for all frontend pods to be Running.
Dec 29 12:09:04.464: INFO: Waiting for frontend to serve content.
Dec 29 12:09:05.519: INFO: Trying to add a new entry to the guestbook.
Dec 29 12:09:05.642: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 29 12:09:05.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:09:06.131: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:09:06.131: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 29 12:09:06.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:09:06.556: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:09:06.556: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 29 12:09:06.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:09:06.729: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:09:06.729: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 29 12:09:06.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:09:06.844: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:09:06.844: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 29 12:09:06.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:09:07.002: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:09:07.002: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 29 12:09:07.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qlktm'
Dec 29 12:09:07.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:09:07.380: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:09:07.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qlktm" for this suite.
Dec 29 12:09:51.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:09:51.795: INFO: namespace: e2e-tests-kubectl-qlktm, resource: bindings, ignored listing per whitelist
Dec 29 12:09:51.829: INFO: namespace e2e-tests-kubectl-qlktm deletion completed in 44.408392655s

• [SLOW TEST:83.974 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:09:51.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1e24e21a-2a34-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 12:09:52.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-dkjwx" to be "success or failure"
Dec 29 12:09:52.141: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.790077ms
Dec 29 12:09:54.161: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042829784s
Dec 29 12:09:56.175: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055976728s
Dec 29 12:09:58.322: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203092615s
Dec 29 12:10:00.339: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220806142s
Dec 29 12:10:02.360: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.241342158s
STEP: Saw pod success
Dec 29 12:10:02.360: INFO: Pod "pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:10:02.364: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 12:10:03.708: INFO: Waiting for pod pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:10:03.721: INFO: Pod pod-configmaps-1e25f5f8-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:10:03.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dkjwx" for this suite.
Dec 29 12:10:09.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:10:09.999: INFO: namespace: e2e-tests-configmap-dkjwx, resource: bindings, ignored listing per whitelist
Dec 29 12:10:10.026: INFO: namespace e2e-tests-configmap-dkjwx deletion completed in 6.290889294s

• [SLOW TEST:18.198 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:10:10.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 12:10:10.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-msvwn'
Dec 29 12:10:10.378: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 29 12:10:10.378: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 29 12:10:14.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-msvwn'
Dec 29 12:10:14.780: INFO: stderr: ""
Dec 29 12:10:14.781: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:10:14.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-msvwn" for this suite.
Dec 29 12:10:20.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:10:21.067: INFO: namespace: e2e-tests-kubectl-msvwn, resource: bindings, ignored listing per whitelist
Dec 29 12:10:21.150: INFO: namespace e2e-tests-kubectl-msvwn deletion completed in 6.30985564s

• [SLOW TEST:11.123 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:10:21.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-4d6zh/configmap-test-2f9d29ad-2a34-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 12:10:21.388: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-4d6zh" to be "success or failure"
Dec 29 12:10:21.456: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.782958ms
Dec 29 12:10:23.815: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426651955s
Dec 29 12:10:25.851: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.462417039s
Dec 29 12:10:28.854: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.465686657s
Dec 29 12:10:30.880: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.491751925s
Dec 29 12:10:32.899: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.510220678s
STEP: Saw pod success
Dec 29 12:10:32.899: INFO: Pod "pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:10:32.903: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005 container env-test: 
STEP: delete the pod
Dec 29 12:10:33.017: INFO: Waiting for pod pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:10:34.026: INFO: Pod pod-configmaps-2f9e163c-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:10:34.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4d6zh" for this suite.
Dec 29 12:10:40.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:10:40.359: INFO: namespace: e2e-tests-configmap-4d6zh, resource: bindings, ignored listing per whitelist
Dec 29 12:10:40.660: INFO: namespace e2e-tests-configmap-4d6zh deletion completed in 6.424834077s

• [SLOW TEST:19.510 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:10:40.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:10:41.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-j87vl" to be "success or failure"
Dec 29 12:10:41.028: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324464ms
Dec 29 12:10:43.313: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290762224s
Dec 29 12:10:45.328: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306112005s
Dec 29 12:10:47.416: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394423347s
Dec 29 12:10:49.438: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416178785s
Dec 29 12:10:51.920: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.898696587s
STEP: Saw pod success
Dec 29 12:10:51.921: INFO: Pod "downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:10:51.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:10:52.218: INFO: Waiting for pod downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:10:52.228: INFO: Pod downwardapi-volume-3b51c0bc-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:10:52.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j87vl" for this suite.
Dec 29 12:10:58.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:10:58.380: INFO: namespace: e2e-tests-downward-api-j87vl, resource: bindings, ignored listing per whitelist
Dec 29 12:10:58.417: INFO: namespace e2e-tests-downward-api-j87vl deletion completed in 6.182197162s

• [SLOW TEST:17.756 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:10:58.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 29 12:10:58.752: INFO: Waiting up to 5m0s for pod "pod-45e290e5-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-78d8q" to be "success or failure"
Dec 29 12:10:58.762: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.759365ms
Dec 29 12:11:00.775: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023124593s
Dec 29 12:11:02.787: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035750301s
Dec 29 12:11:04.978: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226756509s
Dec 29 12:11:07.055: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.303736007s
Dec 29 12:11:09.094: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.34254361s
STEP: Saw pod success
Dec 29 12:11:09.094: INFO: Pod "pod-45e290e5-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:11:09.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-45e290e5-2a34-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:11:09.873: INFO: Waiting for pod pod-45e290e5-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:11:10.116: INFO: Pod pod-45e290e5-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:11:10.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-78d8q" for this suite.
Dec 29 12:11:16.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:11:16.297: INFO: namespace: e2e-tests-emptydir-78d8q, resource: bindings, ignored listing per whitelist
Dec 29 12:11:16.367: INFO: namespace e2e-tests-emptydir-78d8q deletion completed in 6.234757338s

• [SLOW TEST:17.949 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:11:16.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 12:11:16.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2v5fd'
Dec 29 12:11:16.886: INFO: stderr: ""
Dec 29 12:11:16.887: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 29 12:11:16.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-2v5fd'
Dec 29 12:11:22.693: INFO: stderr: ""
Dec 29 12:11:22.693: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:11:22.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2v5fd" for this suite.
Dec 29 12:11:28.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:11:28.836: INFO: namespace: e2e-tests-kubectl-2v5fd, resource: bindings, ignored listing per whitelist
Dec 29 12:11:28.946: INFO: namespace e2e-tests-kubectl-2v5fd deletion completed in 6.242647452s

• [SLOW TEST:12.579 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
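For reference, the `--generator=run-pod/v1` invocation logged above builds a bare Pod object. A hand-written approximation of the manifest it produces (a sketch, not output captured from this run) is:

```yaml
# Approximate equivalent of:
#   kubectl run e2e-test-nginx-pod --restart=Never \
#     --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod   # label added by the run-pod/v1 generator
spec:
  restartPolicy: Never        # --restart=Never maps directly onto the pod spec
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```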
SSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:11:28.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 29 12:11:41.226: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-580365c4-2a34-11ea-9252-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-z7xhk", SelfLink:"/api/v1/namespaces/e2e-tests-pods-z7xhk/pods/pod-submit-remove-580365c4-2a34-11ea-9252-0242ac110005", UID:"5808e7c8-2a34-11ea-a994-fa163e34d433", ResourceVersion:"16459354", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713218289, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"150529469"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7rs6g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0012bf4c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7rs6g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018e2b78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001681860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018e2f10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018e2f50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0018e2f58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0018e2f5c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713218289, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713218299, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713218299, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713218289, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000db73c0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000db73e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://38791a993040d0c1e26af72e5a298b224ed1384297332c54a9b52c208fc7eeb4"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:11:52.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-z7xhk" for this suite.
Dec 29 12:11:58.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:11:58.838: INFO: namespace: e2e-tests-pods-z7xhk, resource: bindings, ignored listing per whitelist
Dec 29 12:11:59.028: INFO: namespace e2e-tests-pods-z7xhk deletion completed in 6.314590293s

• [SLOW TEST:30.081 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
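The pod dumped in the Go struct above boils down to a small spec. Reconstructed as a manifest (a readable sketch derived from the dump, not output captured from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-580365c4-2a34-11ea-9252-0242ac110005
  namespace: e2e-tests-pods-z7xhk
  labels:
    name: foo
    time: "150529469"
spec:
  restartPolicy: Always
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
```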
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:11:59.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 29 12:12:09.869: INFO: Successfully updated pod "annotationupdate69e644a5-2a34-11ea-9252-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:12:12.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wrzql" for this suite.
Dec 29 12:12:36.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:12:36.320: INFO: namespace: e2e-tests-projected-wrzql, resource: bindings, ignored listing per whitelist
Dec 29 12:12:36.564: INFO: namespace e2e-tests-projected-wrzql deletion completed in 24.527447176s

• [SLOW TEST:37.536 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
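The annotation-update test above relies on a pod that reads its own metadata through a projected downwardAPI volume; when the annotations are updated, the kubelet rewrites the mounted file. A minimal sketch of that pod shape (names, image, and command are illustrative, not taken from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative name
  annotations:
    build: one                     # the test later updates this and watches the file change
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```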
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:12:36.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 29 12:12:36.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-p4np8 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 29 12:12:48.227: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 29 12:12:48.227: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:12:50.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p4np8" for this suite.
Dec 29 12:12:57.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:12:57.190: INFO: namespace: e2e-tests-kubectl-p4np8, resource: bindings, ignored listing per whitelist
Dec 29 12:12:57.400: INFO: namespace e2e-tests-kubectl-p4np8 deletion completed in 6.404628317s

• [SLOW TEST:20.834 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:12:57.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 29 12:12:57.768: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-924xf,SelfLink:/api/v1/namespaces/e2e-tests-watch-924xf/configmaps/e2e-watch-test-label-changed,UID:8cb9d80a-2a34-11ea-a994-fa163e34d433,ResourceVersion:16459523,Generation:0,CreationTimestamp:2019-12-29 12:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 29 12:12:57.768: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-924xf,SelfLink:/api/v1/namespaces/e2e-tests-watch-924xf/configmaps/e2e-watch-test-label-changed,UID:8cb9d80a-2a34-11ea-a994-fa163e34d433,ResourceVersion:16459524,Generation:0,CreationTimestamp:2019-12-29 12:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 29 12:12:57.768: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-924xf,SelfLink:/api/v1/namespaces/e2e-tests-watch-924xf/configmaps/e2e-watch-test-label-changed,UID:8cb9d80a-2a34-11ea-a994-fa163e34d433,ResourceVersion:16459525,Generation:0,CreationTimestamp:2019-12-29 12:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 29 12:13:07.888: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-924xf,SelfLink:/api/v1/namespaces/e2e-tests-watch-924xf/configmaps/e2e-watch-test-label-changed,UID:8cb9d80a-2a34-11ea-a994-fa163e34d433,ResourceVersion:16459539,Generation:0,CreationTimestamp:2019-12-29 12:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 29 12:13:07.889: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-924xf,SelfLink:/api/v1/namespaces/e2e-tests-watch-924xf/configmaps/e2e-watch-test-label-changed,UID:8cb9d80a-2a34-11ea-a994-fa163e34d433,ResourceVersion:16459540,Generation:0,CreationTimestamp:2019-12-29 12:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 29 12:13:07.889: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-924xf,SelfLink:/api/v1/namespaces/e2e-tests-watch-924xf/configmaps/e2e-watch-test-label-changed,UID:8cb9d80a-2a34-11ea-a994-fa163e34d433,ResourceVersion:16459541,Generation:0,CreationTimestamp:2019-12-29 12:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:13:07.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-924xf" for this suite.
Dec 29 12:13:13.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:13:14.100: INFO: namespace: e2e-tests-watch-924xf, resource: bindings, ignored listing per whitelist
Dec 29 12:13:14.134: INFO: namespace e2e-tests-watch-924xf deletion completed in 6.231809421s

• [SLOW TEST:16.734 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
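The watch events above show the label-selector semantics this test exercises: when the ConfigMap's label is changed so it no longer matches the selector, the watcher sees a DELETED event; restoring the label produces a fresh ADDED event with the current data. The watched object, reconstructed from the first ADDED event (a sketch, not captured output):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: e2e-tests-watch-924xf
  labels:
    watch-this-configmap: label-changed-and-restored   # the selector the watch filters on
data:
  mutation: "1"   # state as of the first modification
```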
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:13:14.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-96b7c9a8-2a34-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 12:13:14.374: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-62pfn" to be "success or failure"
Dec 29 12:13:14.549: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 174.287344ms
Dec 29 12:13:16.928: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554017364s
Dec 29 12:13:18.950: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575855407s
Dec 29 12:13:20.968: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593245478s
Dec 29 12:13:22.985: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.610263107s
Dec 29 12:13:24.998: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.623633481s
STEP: Saw pod success
Dec 29 12:13:24.998: INFO: Pod "pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:13:25.004: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 29 12:13:25.111: INFO: Waiting for pod pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:13:25.142: INFO: Pod pod-projected-secrets-96b94682-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:13:25.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-62pfn" for this suite.
Dec 29 12:13:31.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:13:31.510: INFO: namespace: e2e-tests-projected-62pfn, resource: bindings, ignored listing per whitelist
Dec 29 12:13:31.643: INFO: namespace e2e-tests-projected-62pfn deletion completed in 6.407901365s

• [SLOW TEST:17.508 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
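The "Creating projection with secret" step above mounts a Secret through a projected volume and has a test container read it back. A minimal sketch of that pod/volume shape (image, command, and paths are illustrative assumptions, not taken from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29          # assumed image for the sketch
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example      # the Secret created by the test
```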
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:13:31.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 29 12:13:32.037: INFO: Waiting up to 5m0s for pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-dtdv8" to be "success or failure"
Dec 29 12:13:32.044: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704623ms
Dec 29 12:13:34.105: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067710035s
Dec 29 12:13:36.132: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094177191s
Dec 29 12:13:38.658: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.620212466s
Dec 29 12:13:40.666: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.628093227s
Dec 29 12:13:42.687: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.649226507s
STEP: Saw pod success
Dec 29 12:13:42.687: INFO: Pod "pod-a12b5f11-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:13:42.697: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a12b5f11-2a34-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:13:43.956: INFO: Waiting for pod pod-a12b5f11-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:13:43.976: INFO: Pod pod-a12b5f11-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:13:43.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dtdv8" for this suite.
Dec 29 12:13:50.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:13:50.114: INFO: namespace: e2e-tests-emptydir-dtdv8, resource: bindings, ignored listing per whitelist
Dec 29 12:13:50.181: INFO: namespace e2e-tests-emptydir-dtdv8 deletion completed in 6.190665417s

• [SLOW TEST:18.538 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
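The emptyDir test above checks the mode of a default-medium volume mount; on the default (disk-backed) medium the mount point is typically created world-writable (0777), which the test container verifies and exits. A sketch of such a pod (names and command are illustrative, not from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # prints the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium specified: node's default storage
```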
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:13:50.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 29 12:13:50.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:13:50.639: INFO: stderr: ""
Dec 29 12:13:50.639: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 29 12:13:50.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:13:50.848: INFO: stderr: ""
Dec 29 12:13:50.848: INFO: stdout: "update-demo-nautilus-vkxtm "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 29 12:13:55.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:13:55.996: INFO: stderr: ""
Dec 29 12:13:55.996: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-vkxtm "
Dec 29 12:13:55.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:13:56.103: INFO: stderr: ""
Dec 29 12:13:56.104: INFO: stdout: ""
Dec 29 12:13:56.104: INFO: update-demo-nautilus-8n6gc is created but not running
Dec 29 12:14:01.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:01.283: INFO: stderr: ""
Dec 29 12:14:01.284: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-vkxtm "
Dec 29 12:14:01.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:01.394: INFO: stderr: ""
Dec 29 12:14:01.394: INFO: stdout: ""
Dec 29 12:14:01.394: INFO: update-demo-nautilus-8n6gc is created but not running
Dec 29 12:14:06.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:06.687: INFO: stderr: ""
Dec 29 12:14:06.687: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-vkxtm "
Dec 29 12:14:06.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:06.795: INFO: stderr: ""
Dec 29 12:14:06.795: INFO: stdout: "true"
Dec 29 12:14:06.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:06.932: INFO: stderr: ""
Dec 29 12:14:06.932: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:06.932: INFO: validating pod update-demo-nautilus-8n6gc
Dec 29 12:14:06.977: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:06.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:06.977: INFO: update-demo-nautilus-8n6gc is verified up and running
Dec 29 12:14:06.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkxtm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:07.118: INFO: stderr: ""
Dec 29 12:14:07.118: INFO: stdout: "true"
Dec 29 12:14:07.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkxtm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:07.228: INFO: stderr: ""
Dec 29 12:14:07.228: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:07.228: INFO: validating pod update-demo-nautilus-vkxtm
Dec 29 12:14:07.239: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:07.239: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:07.239: INFO: update-demo-nautilus-vkxtm is verified up and running
STEP: scaling down the replication controller
Dec 29 12:14:07.243: INFO: scanned /root for discovery docs: 
Dec 29 12:14:07.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:08.700: INFO: stderr: ""
Dec 29 12:14:08.700: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 29 12:14:08.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:08.917: INFO: stderr: ""
Dec 29 12:14:08.917: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-vkxtm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 29 12:14:13.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:14.131: INFO: stderr: ""
Dec 29 12:14:14.131: INFO: stdout: "update-demo-nautilus-8n6gc "
Dec 29 12:14:14.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:14.247: INFO: stderr: ""
Dec 29 12:14:14.248: INFO: stdout: "true"
Dec 29 12:14:14.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:14.366: INFO: stderr: ""
Dec 29 12:14:14.366: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:14.366: INFO: validating pod update-demo-nautilus-8n6gc
Dec 29 12:14:14.375: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:14.375: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:14.375: INFO: update-demo-nautilus-8n6gc is verified up and running
STEP: scaling up the replication controller
Dec 29 12:14:14.378: INFO: scanned /root for discovery docs: 
Dec 29 12:14:14.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:15.720: INFO: stderr: ""
Dec 29 12:14:15.721: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 29 12:14:15.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:16.421: INFO: stderr: ""
Dec 29 12:14:16.422: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-qkrmt "
Dec 29 12:14:16.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:16.551: INFO: stderr: ""
Dec 29 12:14:16.552: INFO: stdout: "true"
Dec 29 12:14:16.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:16.647: INFO: stderr: ""
Dec 29 12:14:16.647: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:16.647: INFO: validating pod update-demo-nautilus-8n6gc
Dec 29 12:14:16.655: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:16.655: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:16.655: INFO: update-demo-nautilus-8n6gc is verified up and running
Dec 29 12:14:16.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qkrmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:16.745: INFO: stderr: ""
Dec 29 12:14:16.746: INFO: stdout: ""
Dec 29 12:14:16.746: INFO: update-demo-nautilus-qkrmt is created but not running
Dec 29 12:14:21.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:22.000: INFO: stderr: ""
Dec 29 12:14:22.000: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-qkrmt "
Dec 29 12:14:22.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:22.156: INFO: stderr: ""
Dec 29 12:14:22.156: INFO: stdout: "true"
Dec 29 12:14:22.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:22.338: INFO: stderr: ""
Dec 29 12:14:22.338: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:22.338: INFO: validating pod update-demo-nautilus-8n6gc
Dec 29 12:14:22.354: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:22.354: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:22.354: INFO: update-demo-nautilus-8n6gc is verified up and running
Dec 29 12:14:22.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qkrmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:22.535: INFO: stderr: ""
Dec 29 12:14:22.536: INFO: stdout: ""
Dec 29 12:14:22.536: INFO: update-demo-nautilus-qkrmt is created but not running
Dec 29 12:14:27.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:27.697: INFO: stderr: ""
Dec 29 12:14:27.697: INFO: stdout: "update-demo-nautilus-8n6gc update-demo-nautilus-qkrmt "
Dec 29 12:14:27.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:27.838: INFO: stderr: ""
Dec 29 12:14:27.838: INFO: stdout: "true"
Dec 29 12:14:27.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n6gc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:27.968: INFO: stderr: ""
Dec 29 12:14:27.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:27.968: INFO: validating pod update-demo-nautilus-8n6gc
Dec 29 12:14:27.982: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:27.982: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:27.982: INFO: update-demo-nautilus-8n6gc is verified up and running
Dec 29 12:14:27.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qkrmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:28.086: INFO: stderr: ""
Dec 29 12:14:28.086: INFO: stdout: "true"
Dec 29 12:14:28.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qkrmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:28.197: INFO: stderr: ""
Dec 29 12:14:28.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:14:28.197: INFO: validating pod update-demo-nautilus-qkrmt
Dec 29 12:14:28.209: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:14:28.210: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 29 12:14:28.210: INFO: update-demo-nautilus-qkrmt is verified up and running
STEP: using delete to clean up resources
Dec 29 12:14:28.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:28.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:14:28.317: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 29 12:14:28.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7hkhj'
Dec 29 12:14:28.510: INFO: stderr: "No resources found.\n"
Dec 29 12:14:28.511: INFO: stdout: ""
Dec 29 12:14:28.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7hkhj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 29 12:14:28.674: INFO: stderr: ""
Dec 29 12:14:28.675: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:14:28.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7hkhj" for this suite.
Dec 29 12:14:52.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:14:52.910: INFO: namespace: e2e-tests-kubectl-7hkhj, resource: bindings, ignored listing per whitelist
Dec 29 12:14:52.984: INFO: namespace e2e-tests-kubectl-7hkhj deletion completed in 24.276763409s

• [SLOW TEST:62.803 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
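The spec above repeatedly runs `kubectl get pods ... -o template` with a go-template that prints `true` only when the `update-demo` container reports a `running` state, and retries every ~5s while stdout is empty ("created but not running"). A minimal Python sketch of that check and retry loop, assuming the pod JSON shape shown in the template (`container_running` and `wait_until_running` are hypothetical helper names, not the framework's actual Go code):

```python
import json
import time

def container_running(pod_json: str, container_name: str) -> bool:
    """Mimic the go-template: true only when the named container has a
    'running' key under .state in .status.containerStatuses."""
    pod = json.loads(pod_json)
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

def wait_until_running(fetch_pod, container_name, interval=5, attempts=60):
    """Poll fetch_pod() until the container runs, like the log's
    'created but not running' retry loop (hypothetical helper)."""
    for _ in range(attempts):
        if container_running(fetch_pod(), container_name):
            return True
        time.sleep(interval)
    return False
```

This mirrors why the log alternates between empty stdout and `stdout: "true"`: the template emits nothing until the container status flips to running.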
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:14:52.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d1aac4a0-2a34-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 12:14:53.535: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-lvk4j" to be "success or failure"
Dec 29 12:14:53.558: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.297136ms
Dec 29 12:14:56.556: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.02088878s
Dec 29 12:14:58.611: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.075621971s
Dec 29 12:15:00.896: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.360139563s
Dec 29 12:15:02.942: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.406304317s
Dec 29 12:15:04.958: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.422765953s
STEP: Saw pod success
Dec 29 12:15:04.958: INFO: Pod "pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:15:04.963: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 12:15:05.106: INFO: Waiting for pod pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005 to disappear
Dec 29 12:15:05.133: INFO: Pod pod-projected-configmaps-d1b17b77-2a34-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:15:05.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lvk4j" for this suite.
Dec 29 12:15:11.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:15:11.367: INFO: namespace: e2e-tests-projected-lvk4j, resource: bindings, ignored listing per whitelist
Dec 29 12:15:11.417: INFO: namespace e2e-tests-projected-lvk4j deletion completed in 6.20278761s

• [SLOW TEST:18.433 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
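The volume tests above all follow the same pattern: the framework logs `Waiting up to 5m0s for pod "..." to be "success or failure"`, then polls the pod phase every few seconds until it leaves `Pending` for a terminal phase (`Succeeded` here). A minimal sketch of that wait, assuming a caller-supplied `get_phase` callback (`wait_for_success_or_failure` is a hypothetical name, not the real Go implementation):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300, poll=2):
    """Poll the pod phase until it reaches a terminal state, mirroring the
    framework's 'Waiting up to 5m0s ... to be "success or failure"' loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()  # e.g. "Pending", "Succeeded", "Failed"
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll)
    raise TimeoutError("pod never reached a terminal phase")
```

Each `Phase="Pending" ... Elapsed: ...` line in the log corresponds to one iteration of such a loop; `STEP: Saw pod success` fires once the phase turns `Succeeded`.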
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:15:11.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:15:11.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:15:19.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8c64z" for this suite.
Dec 29 12:16:05.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:16:05.840: INFO: namespace: e2e-tests-pods-8c64z, resource: bindings, ignored listing per whitelist
Dec 29 12:16:05.961: INFO: namespace e2e-tests-pods-8c64z deletion completed in 46.209566123s

• [SLOW TEST:54.543 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
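The websocket-logs spec above exercises the pod `log` subresource over a WebSocket upgrade rather than plain HTTP streaming. As an illustration of the endpoint involved, here is a sketch that only builds the URL (the real test drives this through the Go client using the kubeConfig; `pod_log_ws_url` is a hypothetical helper, and the `wss://` scheme plus query parameters are assumptions about how a client would request the upgrade):

```python
def pod_log_ws_url(host: str, namespace: str, pod: str, container: str) -> str:
    """Build the pods/log subresource path (/api/v1/namespaces/{ns}/pods/{pod}/log)
    as a WebSocket URL with follow enabled."""
    return (
        f"wss://{host}/api/v1/namespaces/{namespace}/pods/{pod}/log"
        f"?container={container}&follow=true"
    )
```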
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:16:05.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:16:18.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mrxp4" for this suite.
Dec 29 12:16:26.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:16:26.582: INFO: namespace: e2e-tests-kubelet-test-mrxp4, resource: bindings, ignored listing per whitelist
Dec 29 12:16:26.686: INFO: namespace e2e-tests-kubelet-test-mrxp4 deletion completed in 8.306794855s

• [SLOW TEST:20.724 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:16:26.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:16:26.821: INFO: Waiting up to 5m0s for pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-lsmvt" to be "success or failure"
Dec 29 12:16:26.943: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 121.031872ms
Dec 29 12:16:28.968: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146338471s
Dec 29 12:16:30.984: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162527405s
Dec 29 12:16:33.657: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.8352742s
Dec 29 12:16:35.711: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.889111006s
Dec 29 12:16:37.722: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.900386645s
STEP: Saw pod success
Dec 29 12:16:37.722: INFO: Pod "downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:16:37.727: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:16:38.871: INFO: Waiting for pod downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:16:38.895: INFO: Pod downwardapi-volume-096c68af-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:16:38.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lsmvt" for this suite.
Dec 29 12:16:45.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:16:45.210: INFO: namespace: e2e-tests-downward-api-lsmvt, resource: bindings, ignored listing per whitelist
Dec 29 12:16:45.232: INFO: namespace e2e-tests-downward-api-lsmvt deletion completed in 6.314384686s

• [SLOW TEST:18.546 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:16:45.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:16:45.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-xcrfb" to be "success or failure"
Dec 29 12:16:45.530: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.353278ms
Dec 29 12:16:47.563: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050746022s
Dec 29 12:16:49.589: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077179566s
Dec 29 12:16:51.768: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256413935s
Dec 29 12:16:53.909: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.397034294s
Dec 29 12:16:55.921: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.409225344s
Dec 29 12:16:57.938: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.426342167s
STEP: Saw pod success
Dec 29 12:16:57.938: INFO: Pod "downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:16:57.942: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:16:59.022: INFO: Waiting for pod downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:16:59.039: INFO: Pod downwardapi-volume-147e7428-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:16:59.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xcrfb" for this suite.
Dec 29 12:17:05.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:17:05.304: INFO: namespace: e2e-tests-projected-xcrfb, resource: bindings, ignored listing per whitelist
Dec 29 12:17:05.336: INFO: namespace e2e-tests-projected-xcrfb deletion completed in 6.287767956s

• [SLOW TEST:20.104 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:17:05.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 29 12:17:05.652: INFO: Waiting up to 5m0s for pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-xrf4l" to be "success or failure"
Dec 29 12:17:05.696: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.640198ms
Dec 29 12:17:07.814: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161283674s
Dec 29 12:17:09.843: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190109366s
Dec 29 12:17:12.063: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410460833s
Dec 29 12:17:14.074: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421747108s
Dec 29 12:17:16.094: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.441685938s
STEP: Saw pod success
Dec 29 12:17:16.094: INFO: Pod "downward-api-208f7720-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:17:16.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-208f7720-2a35-11ea-9252-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 29 12:17:16.214: INFO: Waiting for pod downward-api-208f7720-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:17:16.388: INFO: Pod downward-api-208f7720-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:17:16.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xrf4l" for this suite.
Dec 29 12:17:22.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:17:22.672: INFO: namespace: e2e-tests-downward-api-xrf4l, resource: bindings, ignored listing per whitelist
Dec 29 12:17:22.720: INFO: namespace e2e-tests-downward-api-xrf4l deletion completed in 6.313131815s

• [SLOW TEST:17.383 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
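The Downward API test above injects the container's own resource requests and limits as environment variables via `resourceFieldRef`. A minimal sketch of the kind of pod spec it creates (pod name, image, and resource values here are illustrative, not taken from the log; only the container name `dapi-container` appears in the output above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # print the injected variables, then exit
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```

The pod runs to completion ("success or failure" in the log means phase `Succeeded` with exit code 0), and the framework then reads the container logs to verify the variable values.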
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:17:22.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:17:23.067: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.63733ms)
Dec 29 12:17:23.074: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.187274ms)
Dec 29 12:17:23.080: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.758185ms)
Dec 29 12:17:23.084: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.838688ms)
Dec 29 12:17:23.090: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.450892ms)
Dec 29 12:17:23.204: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 113.549283ms)
Dec 29 12:17:23.215: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.777319ms)
Dec 29 12:17:23.222: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.980266ms)
Dec 29 12:17:23.227: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.360864ms)
Dec 29 12:17:23.236: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.171119ms)
Dec 29 12:17:23.247: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.857586ms)
Dec 29 12:17:23.255: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.585071ms)
Dec 29 12:17:23.259: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.145932ms)
Dec 29 12:17:23.265: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.758548ms)
Dec 29 12:17:23.270: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.97232ms)
Dec 29 12:17:23.275: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.800428ms)
Dec 29 12:17:23.280: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.768829ms)
Dec 29 12:17:23.285: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.788324ms)
Dec 29 12:17:23.290: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.769557ms)
Dec 29 12:17:23.294: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.966995ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:17:23.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-fvd68" for this suite.
Dec 29 12:17:29.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:17:29.415: INFO: namespace: e2e-tests-proxy-fvd68, resource: bindings, ignored listing per whitelist
Dec 29 12:17:29.490: INFO: namespace e2e-tests-proxy-fvd68 deletion completed in 6.19159627s

• [SLOW TEST:6.769 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:17:29.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 29 12:17:29.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 29 12:17:29.697: INFO: Waiting for terminating namespaces to be deleted...
Dec 29 12:17:29.702: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 29 12:17:29.719: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:17:29.719: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:17:29.719: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:17:29.719: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 29 12:17:29.719: INFO: 	Container coredns ready: true, restart count 0
Dec 29 12:17:29.719: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 29 12:17:29.719: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 29 12:17:29.719: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:17:29.719: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 29 12:17:29.719: INFO: 	Container weave ready: true, restart count 0
Dec 29 12:17:29.719: INFO: 	Container weave-npc ready: true, restart count 0
Dec 29 12:17:29.719: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 29 12:17:29.719: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e4d6b6c862f1b9], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:17:31.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qblk2" for this suite.
Dec 29 12:17:37.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:17:37.711: INFO: namespace: e2e-tests-sched-pred-qblk2, resource: bindings, ignored listing per whitelist
Dec 29 12:17:37.724: INFO: namespace e2e-tests-sched-pred-qblk2 deletion completed in 6.303090671s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:8.235 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
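The scheduler-predicates test above creates a pod whose `nodeSelector` matches no node, then asserts that a `FailedScheduling` event with the message seen in the log is emitted. A sketch of such a pod (the selector key/value here is a guess for illustration; only the pod name `restricted-pod` is confirmed by the event in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod             # name echoed in the FailedScheduling event
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    nonexistent-label: "true"      # assumed: no node carries this label,
                                   # so 0/1 nodes match and the pod stays Pending
```

Because the selector can never be satisfied, the pod is never scheduled; the test passes by observing the warning event rather than by the pod running.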
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:17:37.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-33e0294e-2a35-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 12:17:38.095: INFO: Waiting up to 5m0s for pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-nhhzf" to be "success or failure"
Dec 29 12:17:38.216: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 121.309327ms
Dec 29 12:17:40.380: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284842794s
Dec 29 12:17:42.425: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330251053s
Dec 29 12:17:44.455: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360671182s
Dec 29 12:17:46.483: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388789419s
Dec 29 12:17:48.513: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.418674364s
STEP: Saw pod success
Dec 29 12:17:48.514: INFO: Pod "pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:17:48.537: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 12:17:48.765: INFO: Waiting for pod pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:17:48.865: INFO: Pod pod-secrets-33ea9289-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:17:48.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nhhzf" for this suite.
Dec 29 12:17:56.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:17:57.080: INFO: namespace: e2e-tests-secrets-nhhzf, resource: bindings, ignored listing per whitelist
Dec 29 12:17:57.087: INFO: namespace e2e-tests-secrets-nhhzf deletion completed in 8.203404778s
STEP: Destroying namespace "e2e-tests-secret-namespace-z4gkh" for this suite.
Dec 29 12:18:03.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:18:03.197: INFO: namespace: e2e-tests-secret-namespace-z4gkh, resource: bindings, ignored listing per whitelist
Dec 29 12:18:03.302: INFO: namespace e2e-tests-secret-namespace-z4gkh deletion completed in 6.214648487s

• [SLOW TEST:25.577 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
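The Secrets test above verifies that a pod mounts the secret from its *own* namespace even when a secret with the same name exists in a second namespace (hence the two namespaces, `e2e-tests-secrets-nhhzf` and `e2e-tests-secret-namespace-z4gkh`, destroyed at the end). A minimal sketch of the manifests involved (names, key, and command are illustrative; only the container name `secret-volume-test` appears in the log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                # same name is created in both namespaces
type: Opaque
data:
  data-1: dmFsdWUtMQ==             # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test      # resolved in the pod's namespace only
```

Secret references in a pod spec are always resolved within the pod's namespace, so the same-named secret elsewhere is irrelevant; the test confirms the mounted content matches the local secret.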
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:18:03.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 29 12:18:03.475: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:18:21.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-hgb2n" for this suite.
Dec 29 12:18:27.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:18:27.610: INFO: namespace: e2e-tests-init-container-hgb2n, resource: bindings, ignored listing per whitelist
Dec 29 12:18:27.758: INFO: namespace e2e-tests-init-container-hgb2n deletion completed in 6.252838769s

• [SLOW TEST:24.456 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
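The init-container test above relies on the rule that with `restartPolicy: Never`, a failing init container is not retried: the pod goes to phase `Failed` and the app containers never start. A hedged sketch of such a pod (all names and images here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo             # hypothetical name
spec:
  restartPolicy: Never             # a failed init container fails the whole pod
  initContainers:
  - name: init1
    image: busybox
    command: ["false"]             # exits non-zero on first run
  containers:
  - name: run-app
    image: busybox
    command: ["sleep", "3600"]     # must never be started by the kubelet
```

With `restartPolicy: OnFailure` or `Always`, the kubelet would instead restart the init container repeatedly; `Never` is what makes the single failure terminal, which is exactly what this conformance case checks.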
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:18:27.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:18:28.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-bcwbr" to be "success or failure"
Dec 29 12:18:28.048: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.168722ms
Dec 29 12:18:30.065: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031768706s
Dec 29 12:18:32.095: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061546571s
Dec 29 12:18:34.250: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21662009s
Dec 29 12:18:36.268: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234962391s
Dec 29 12:18:38.295: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.262425525s
STEP: Saw pod success
Dec 29 12:18:38.296: INFO: Pod "downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:18:38.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:18:39.493: INFO: Waiting for pod downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:18:39.510: INFO: Pod downwardapi-volume-51aa8381-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:18:39.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bcwbr" for this suite.
Dec 29 12:18:45.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:18:45.647: INFO: namespace: e2e-tests-projected-bcwbr, resource: bindings, ignored listing per whitelist
Dec 29 12:18:45.917: INFO: namespace e2e-tests-projected-bcwbr deletion completed in 6.39728465s

• [SLOW TEST:18.160 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
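The projected-downwardAPI test above checks that a per-item `mode` is honored on the file materialized in the volume. A sketch of the relevant spec (paths, mode value, and names other than the logged container name `client-container` are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]  # reveal the file mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400             # per-item mode under test
```

The framework then asserts, from the container's output, that `/etc/podinfo/podname` carries the requested mode rather than the volume's default.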
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:18:45.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:18:46.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-9hgqf" to be "success or failure"
Dec 29 12:18:46.170: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.447794ms
Dec 29 12:18:48.278: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129351128s
Dec 29 12:18:50.299: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150721034s
Dec 29 12:18:52.766: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617531704s
Dec 29 12:18:54.798: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.650001698s
Dec 29 12:18:56.978: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.829419173s
STEP: Saw pod success
Dec 29 12:18:56.978: INFO: Pod "downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:18:57.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:18:57.206: INFO: Waiting for pod downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:18:57.222: INFO: Pod downwardapi-volume-5c78c981-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:18:57.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9hgqf" for this suite.
Dec 29 12:19:03.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:19:03.389: INFO: namespace: e2e-tests-downward-api-9hgqf, resource: bindings, ignored listing per whitelist
Dec 29 12:19:03.458: INFO: namespace e2e-tests-downward-api-9hgqf deletion completed in 6.22984405s

• [SLOW TEST:17.539 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
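The Downward API *volume* variant above exposes the container's memory limit as a file rather than an environment variable. A sketch of the volume definition such a test uses (values and names other than the logged container name `client-container` are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi             # report the limit in mebibytes
```

Unlike the env-var form, `resourceFieldRef` in a volume requires an explicit `containerName`, since the volume is defined at pod scope.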
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:19:03.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 29 12:19:04.302: INFO: created pod pod-service-account-defaultsa
Dec 29 12:19:04.302: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 29 12:19:04.329: INFO: created pod pod-service-account-mountsa
Dec 29 12:19:04.329: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 29 12:19:04.595: INFO: created pod pod-service-account-nomountsa
Dec 29 12:19:04.595: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 29 12:19:04.614: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 29 12:19:04.614: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 29 12:19:04.814: INFO: created pod pod-service-account-mountsa-mountspec
Dec 29 12:19:04.814: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 29 12:19:04.931: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 29 12:19:04.931: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 29 12:19:05.062: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 29 12:19:05.063: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 29 12:19:05.078: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 29 12:19:05.078: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 29 12:19:06.251: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 29 12:19:06.252: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:19:06.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-ggxtk" for this suite.
Dec 29 12:19:36.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:19:36.441: INFO: namespace: e2e-tests-svcaccounts-ggxtk, resource: bindings, ignored listing per whitelist
Dec 29 12:19:36.481: INFO: namespace e2e-tests-svcaccounts-ggxtk deletion completed in 29.361074926s

• [SLOW TEST:33.024 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
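The ServiceAccounts test above exercises `automountServiceAccountToken` at both the service-account and pod level: the log shows nine pods covering the combinations, with the pod-level field taking precedence over the service account's setting whenever both are set. A sketch of the opt-out pair (service-account name is illustrative; the pod name `pod-service-account-nomountsa` is echoed in the log):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                 # hypothetical name
automountServiceAccountToken: false   # SA-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa
spec:
  serviceAccountName: nomount-sa
  # pod spec field unset here, so the SA's "false" applies
  # and no token volume is mounted (mount: false in the log)
  containers:
  - name: token-test
    image: busybox
    command: ["sleep", "3600"]
```

That precedence explains log lines such as `pod-service-account-nomountsa-mountspec ... mount: true`: an explicit `automountServiceAccountToken: true` in the pod spec overrides the service account's opt-out.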
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:19:36.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 29 12:19:36.752: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:19:36.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6vrl4" for this suite.
Dec 29 12:19:42.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:19:43.160: INFO: namespace: e2e-tests-kubectl-6vrl4, resource: bindings, ignored listing per whitelist
Dec 29 12:19:43.174: INFO: namespace e2e-tests-kubectl-6vrl4 deletion completed in 6.284435291s

• [SLOW TEST:6.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:19:43.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 29 12:19:43.506: INFO: Waiting up to 5m0s for pod "pod-7ea98166-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-cxqz8" to be "success or failure"
Dec 29 12:19:43.518: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.927194ms
Dec 29 12:19:46.005: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.498712077s
Dec 29 12:19:48.035: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528574394s
Dec 29 12:19:50.049: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.542322053s
Dec 29 12:19:52.071: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56437535s
Dec 29 12:19:54.088: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.581477659s
STEP: Saw pod success
Dec 29 12:19:54.088: INFO: Pod "pod-7ea98166-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:19:54.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7ea98166-2a35-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:19:54.711: INFO: Waiting for pod pod-7ea98166-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:19:54.971: INFO: Pod pod-7ea98166-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:19:54.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cxqz8" for this suite.
Dec 29 12:20:01.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:20:01.175: INFO: namespace: e2e-tests-emptydir-cxqz8, resource: bindings, ignored listing per whitelist
Dec 29 12:20:01.232: INFO: namespace e2e-tests-emptydir-cxqz8 deletion completed in 6.249468103s

• [SLOW TEST:18.058 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
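Editor's note: the emptyDir test above writes a file into a Memory-medium (tmpfs) volume and verifies its permission bits. A minimal sketch of that permission check, assuming a 0666 request as in the test name (`mode_matches` is a hypothetical helper, not taken from the e2e source):

```python
import stat

def mode_matches(actual_mode: int, requested: int = 0o666) -> bool:
    """Compare only the permission bits of a stat mode word,
    ignoring the file-type bits."""
    return stat.S_IMODE(actual_mode) == requested

# A regular file with rw-rw-rw- permissions satisfies the check:
print(mode_matches(stat.S_IFREG | 0o666))  # True
```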
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:20:01.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:20:01.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-8fs2t" to be "success or failure"
Dec 29 12:20:01.454: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.321511ms
Dec 29 12:20:03.473: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055663595s
Dec 29 12:20:05.484: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066556944s
Dec 29 12:20:07.498: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080507764s
Dec 29 12:20:09.507: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089669115s
Dec 29 12:20:12.108: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.690993999s
STEP: Saw pod success
Dec 29 12:20:12.109: INFO: Pod "downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:20:12.124: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:20:12.816: INFO: Waiting for pod downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:20:12.826: INFO: Pod downwardapi-volume-8957c5cf-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:20:12.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8fs2t" for this suite.
Dec 29 12:20:18.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:20:19.110: INFO: namespace: e2e-tests-downward-api-8fs2t, resource: bindings, ignored listing per whitelist
Dec 29 12:20:19.116: INFO: namespace e2e-tests-downward-api-8fs2t deletion completed in 6.284982024s

• [SLOW TEST:17.883 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
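Editor's note: the downward API test above exposes node allocatable CPU as the container's default limit when none is set; the quantities elsewhere in this log use Kubernetes millicore notation ("250m", "100m"). A hedged sketch of that notation, not the upstream resource.Quantity parser:

```python
def cpu_to_millicores(q: str) -> int:
    """Parse a Kubernetes-style CPU quantity ('250m', '1', '0.5')
    into integer millicores. Illustrative only."""
    if q.endswith("m"):
        return int(q[:-1])
    return int(float(q) * 1000)

print(cpu_to_millicores("250m"))  # 250
```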
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:20:19.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 29 12:20:19.325: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 29 12:20:19.337: INFO: Waiting for terminating namespaces to be deleted...
Dec 29 12:20:19.341: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 29 12:20:19.360: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Dec 29 12:20:19.360: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 29 12:20:19.360: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:20:19.360: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 29 12:20:19.361: INFO: 	Container weave ready: true, restart count 0
Dec 29 12:20:19.361: INFO: 	Container weave-npc ready: true, restart count 0
Dec 29 12:20:19.361: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 29 12:20:19.361: INFO: 	Container coredns ready: true, restart count 0
Dec 29 12:20:19.361: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:20:19.361: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:20:19.361: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:20:19.361: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 29 12:20:19.361: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.495: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 29 12:20:19.496: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-941fbfc0-2a35-11ea-9252-0242ac110005.15e4d6de4a86f0f8], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-7h5f8/filler-pod-941fbfc0-2a35-11ea-9252-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-941fbfc0-2a35-11ea-9252-0242ac110005.15e4d6df73ed508d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-941fbfc0-2a35-11ea-9252-0242ac110005.15e4d6e01775b148], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-941fbfc0-2a35-11ea-9252-0242ac110005.15e4d6e04fc8122b], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e4d6e0a400d159], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:20:30.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7h5f8" for this suite.
Dec 29 12:20:39.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:20:39.231: INFO: namespace: e2e-tests-sched-pred-7h5f8, resource: bindings, ignored listing per whitelist
Dec 29 12:20:39.279: INFO: namespace e2e-tests-sched-pred-7h5f8 deletion completed in 8.311174433s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.163 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
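Editor's note: the scheduler-predicates test above sums the CPU already requested on the node, starts a filler pod to consume the remainder, and expects one more pod to fail with "Insufficient cpu". The per-pod figures below are taken from the log; the 2000m allocatable value is a hypothetical stand-in, since the log does not print it:

```python
# CPU requests logged for node hunter-server-hu5at5svl7ps (millicores):
requests_m = {
    "coredns-54ff9cd656-79kxx": 100,
    "coredns-54ff9cd656-bmkk4": 100,
    "etcd-hunter-server-hu5at5svl7ps": 0,
    "kube-apiserver-hunter-server-hu5at5svl7ps": 250,
    "kube-controller-manager-hunter-server-hu5at5svl7ps": 200,
    "kube-proxy-bqnnz": 0,
    "kube-scheduler-hunter-server-hu5at5svl7ps": 100,
    "weave-net-tqwf2": 20,
}
used_m = sum(requests_m.values())  # 770 millicores already requested

def filler_request(allocatable_m: int, used_m: int) -> int:
    """CPU a filler pod must request so no room remains for another pod."""
    return allocatable_m - used_m

# With a hypothetical 2000m allocatable, the filler requests 1230m,
# so any additional CPU-requesting pod is rejected: 'Insufficient cpu'.
print(used_m, filler_request(2000, used_m))
```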
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:20:39.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:20:39.627: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 29 12:20:39.816: INFO: Number of nodes with available pods: 0
Dec 29 12:20:39.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:41.224: INFO: Number of nodes with available pods: 0
Dec 29 12:20:41.224: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:42.245: INFO: Number of nodes with available pods: 0
Dec 29 12:20:42.245: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:42.842: INFO: Number of nodes with available pods: 0
Dec 29 12:20:42.842: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:43.846: INFO: Number of nodes with available pods: 0
Dec 29 12:20:43.846: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:44.884: INFO: Number of nodes with available pods: 0
Dec 29 12:20:44.884: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:45.872: INFO: Number of nodes with available pods: 0
Dec 29 12:20:45.872: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:47.057: INFO: Number of nodes with available pods: 0
Dec 29 12:20:47.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:47.891: INFO: Number of nodes with available pods: 0
Dec 29 12:20:47.891: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:48.889: INFO: Number of nodes with available pods: 0
Dec 29 12:20:48.889: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:49.869: INFO: Number of nodes with available pods: 0
Dec 29 12:20:49.869: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:20:50.852: INFO: Number of nodes with available pods: 1
Dec 29 12:20:50.852: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 29 12:20:51.071: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:52.123: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:53.133: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:54.146: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:55.127: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:56.125: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:57.167: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:57.167: INFO: Pod daemon-set-l4tsf is not available
Dec 29 12:20:58.124: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:58.124: INFO: Pod daemon-set-l4tsf is not available
Dec 29 12:20:59.140: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:20:59.140: INFO: Pod daemon-set-l4tsf is not available
Dec 29 12:21:00.140: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:21:00.140: INFO: Pod daemon-set-l4tsf is not available
Dec 29 12:21:01.136: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:21:01.136: INFO: Pod daemon-set-l4tsf is not available
Dec 29 12:21:02.162: INFO: Wrong image for pod: daemon-set-l4tsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 29 12:21:02.163: INFO: Pod daemon-set-l4tsf is not available
Dec 29 12:21:03.125: INFO: Pod daemon-set-457vg is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 29 12:21:03.151: INFO: Number of nodes with available pods: 0
Dec 29 12:21:03.151: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:04.328: INFO: Number of nodes with available pods: 0
Dec 29 12:21:04.328: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:05.201: INFO: Number of nodes with available pods: 0
Dec 29 12:21:05.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:06.186: INFO: Number of nodes with available pods: 0
Dec 29 12:21:06.187: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:07.189: INFO: Number of nodes with available pods: 0
Dec 29 12:21:07.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:08.182: INFO: Number of nodes with available pods: 0
Dec 29 12:21:08.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:09.184: INFO: Number of nodes with available pods: 0
Dec 29 12:21:09.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:10.191: INFO: Number of nodes with available pods: 0
Dec 29 12:21:10.191: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:11.185: INFO: Number of nodes with available pods: 0
Dec 29 12:21:11.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:21:12.174: INFO: Number of nodes with available pods: 1
Dec 29 12:21:12.174: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zd8p4, will wait for the garbage collector to delete the pods
Dec 29 12:21:12.278: INFO: Deleting DaemonSet.extensions daemon-set took: 20.726026ms
Dec 29 12:21:12.479: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.216287ms
Dec 29 12:21:19.517: INFO: Number of nodes with available pods: 0
Dec 29 12:21:19.517: INFO: Number of running nodes: 0, number of available pods: 0
Dec 29 12:21:19.539: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zd8p4/daemonsets","resourceVersion":"16460765"},"items":null}

Dec 29 12:21:19.550: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zd8p4/pods","resourceVersion":"16460766"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:21:19.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zd8p4" for this suite.
Dec 29 12:21:25.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:21:25.950: INFO: namespace: e2e-tests-daemonsets-zd8p4, resource: bindings, ignored listing per whitelist
Dec 29 12:21:25.959: INFO: namespace e2e-tests-daemonsets-zd8p4 deletion completed in 6.389186648s

• [SLOW TEST:46.680 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
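Editor's note: the DaemonSet rollout above is driven by poll loops ("Waiting up to 5m0s ... Elapsed:") that recheck node/pod state every interval until a condition holds or the deadline passes. A minimal sketch of that pattern with injectable clock/sleep for testing (the Go framework polls similarly; this helper is illustrative, not its implementation):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout_s` elapses.
    Returns True on success, False on timeout."""
    start = clock()
    while clock() - start < timeout_s:
        if condition():
            return True
        sleep(interval_s)
    return False
```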
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:21:25.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 29 12:21:26.382: INFO: Waiting up to 5m0s for pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-nns5b" to be "success or failure"
Dec 29 12:21:26.402: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.333303ms
Dec 29 12:21:28.426: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043840052s
Dec 29 12:21:30.460: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077450303s
Dec 29 12:21:32.532: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149834176s
Dec 29 12:21:34.567: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184734523s
Dec 29 12:21:36.590: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.20716722s
STEP: Saw pod success
Dec 29 12:21:36.590: INFO: Pod "pod-bbeb0a41-2a35-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:21:36.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bbeb0a41-2a35-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:21:36.797: INFO: Waiting for pod pod-bbeb0a41-2a35-11ea-9252-0242ac110005 to disappear
Dec 29 12:21:36.832: INFO: Pod pod-bbeb0a41-2a35-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:21:36.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nns5b" for this suite.
Dec 29 12:21:42.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:21:42.988: INFO: namespace: e2e-tests-emptydir-nns5b, resource: bindings, ignored listing per whitelist
Dec 29 12:21:43.039: INFO: namespace e2e-tests-emptydir-nns5b deletion completed in 6.190104024s

• [SLOW TEST:17.079 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:21:43.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 29 12:22:03.451: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 29 12:22:03.553: INFO: Pod pod-with-prestop-http-hook still exists
Dec 29 12:22:05.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 29 12:22:05.573: INFO: Pod pod-with-prestop-http-hook still exists
Dec 29 12:22:07.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 29 12:22:07.574: INFO: Pod pod-with-prestop-http-hook still exists
Dec 29 12:22:09.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 29 12:22:09.576: INFO: Pod pod-with-prestop-http-hook still exists
Dec 29 12:22:11.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 29 12:22:11.602: INFO: Pod pod-with-prestop-http-hook still exists
Dec 29 12:22:13.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 29 12:22:13.593: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:22:13.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-64pzx" for this suite.
Dec 29 12:22:37.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:22:37.927: INFO: namespace: e2e-tests-container-lifecycle-hook-64pzx, resource: bindings, ignored listing per whitelist
Dec 29 12:22:38.005: INFO: namespace e2e-tests-container-lifecycle-hook-64pzx deletion completed in 24.286537662s

• [SLOW TEST:54.966 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
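Editor's note: the preStop test above deletes a pod whose lifecycle hook issues an HTTP GET to a handler pod, then verifies the handler saw the request. The essential guarantee is ordering: the hook runs (and completes or times out) before the container is terminated. A toy illustration of that ordering only, not the kubelet implementation:

```python
def delete_pod_with_prestop(prestop_hook, terminate):
    """Run the preStop hook, then terminate; record events in order."""
    events = []
    prestop_hook(events)   # e.g. HTTP GET to the handler pod
    terminate(events)
    return events

events = delete_pod_with_prestop(
    lambda ev: ev.append("prestop-http-get"),
    lambda ev: ev.append("terminated"),
)
print(events)  # ['prestop-http-get', 'terminated']
```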
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:22:38.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e6e11b58-2a35-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e6e11b58-2a35-11ea-9252-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:24:03.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g765q" for this suite.
Dec 29 12:24:29.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:24:29.345: INFO: namespace: e2e-tests-projected-g765q, resource: bindings, ignored listing per whitelist
Dec 29 12:24:29.497: INFO: namespace e2e-tests-projected-g765q deletion completed in 26.244803724s

• [SLOW TEST:111.491 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
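Editor's note: projected configMap updates reach the volume only on the kubelet's periodic sync, which is why "waiting to observe update in volume" dominates this test's 111s runtime. The log's own timestamps bound the window: the test client was created at 12:22:38.006 and teardown began at 12:24:03.240, after the update was observed:

```python
from datetime import datetime

def ts(s: str) -> datetime:
    """Parse an 'HH:MM:SS.mmm' log timestamp (date-less)."""
    return datetime.strptime(s, "%H:%M:%S.%f")

started  = ts("12:22:38.006")  # client created for this test (log above)
observed = ts("12:24:03.240")  # teardown began after the update was seen
window_s = (observed - started).total_seconds()
print(window_s)  # 85.234 -- an upper bound on the propagation delay
```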
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:24:29.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 29 12:24:41.964: INFO: Pod pod-hostip-294ed87e-2a36-11ea-9252-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:24:41.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vzxlv" for this suite.
Dec 29 12:25:06.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:25:06.091: INFO: namespace: e2e-tests-pods-vzxlv, resource: bindings, ignored listing per whitelist
Dec 29 12:25:06.328: INFO: namespace e2e-tests-pods-vzxlv deletion completed in 24.354078146s

• [SLOW TEST:36.831 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
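Editor's note: the hostIP test above only asserts that the pod's `.status.hostIP` is populated with the node's address (here 10.96.1.240). A hedged sketch of the corresponding sanity check, using only the standard library:

```python
import ipaddress

def valid_host_ip(s: str) -> bool:
    """The hostIP reported in pod status should parse as an IP address."""
    try:
        ipaddress.ip_address(s)
        return True
    except ValueError:
        return False

print(valid_host_ip("10.96.1.240"))  # True (the hostIP from the log)
```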
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:25:06.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:25:06.633: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-qdc5b" to be "success or failure"
Dec 29 12:25:06.645: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.762235ms
Dec 29 12:25:08.688: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05429488s
Dec 29 12:25:10.757: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122905854s
Dec 29 12:25:13.251: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617689253s
Dec 29 12:25:15.275: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.641444896s
Dec 29 12:25:17.293: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.658900375s
STEP: Saw pod success
Dec 29 12:25:17.293: INFO: Pod "downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:25:17.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:25:19.020: INFO: Waiting for pod downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005 to disappear
Dec 29 12:25:19.058: INFO: Pod downwardapi-volume-3f39cdc1-2a36-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:25:19.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qdc5b" for this suite.
Dec 29 12:25:25.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:25:25.303: INFO: namespace: e2e-tests-projected-qdc5b, resource: bindings, ignored listing per whitelist
Dec 29 12:25:25.422: INFO: namespace e2e-tests-projected-qdc5b deletion completed in 6.305642781s

• [SLOW TEST:19.093 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:25:25.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6ft2t;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6ft2t;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6ft2t.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.165_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6ft2t;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6ft2t;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6ft2t.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6ft2t.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.12.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.12.165_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 29 12:25:41.970: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:41.975: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:41.985: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6ft2t from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:41.993: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6ft2t from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.001: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6ft2t.svc from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.009: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6ft2t.svc from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.014: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.032: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.036: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.041: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.045: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005)
Dec 29 12:25:42.059: INFO: Lookups using e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6ft2t jessie_tcp@dns-test-service.e2e-tests-dns-6ft2t jessie_udp@dns-test-service.e2e-tests-dns-6ft2t.svc jessie_tcp@dns-test-service.e2e-tests-dns-6ft2t.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6ft2t.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6ft2t.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 29 12:25:47.177: INFO: DNS probes using e2e-tests-dns-6ft2t/dns-test-4aab1ad6-2a36-11ea-9252-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:25:47.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-6ft2t" for this suite.
Dec 29 12:25:53.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:25:53.974: INFO: namespace: e2e-tests-dns-6ft2t, resource: bindings, ignored listing per whitelist
Dec 29 12:25:54.039: INFO: namespace e2e-tests-dns-6ft2t deletion completed in 6.455396889s

• [SLOW TEST:28.617 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:25:54.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1229 12:25:56.786903       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 12:25:56.787: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:25:56.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-snt86" for this suite.
Dec 29 12:26:05.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:26:05.309: INFO: namespace: e2e-tests-gc-snt86, resource: bindings, ignored listing per whitelist
Dec 29 12:26:05.347: INFO: namespace e2e-tests-gc-snt86 deletion completed in 8.552994496s

• [SLOW TEST:11.307 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:26:05.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 29 12:26:05.760: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rzrbg,SelfLink:/api/v1/namespaces/e2e-tests-watch-rzrbg/configmaps/e2e-watch-test-resource-version,UID:626acc31-2a36-11ea-a994-fa163e34d433,ResourceVersion:16461354,Generation:0,CreationTimestamp:2019-12-29 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 29 12:26:05.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rzrbg,SelfLink:/api/v1/namespaces/e2e-tests-watch-rzrbg/configmaps/e2e-watch-test-resource-version,UID:626acc31-2a36-11ea-a994-fa163e34d433,ResourceVersion:16461355,Generation:0,CreationTimestamp:2019-12-29 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:26:05.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rzrbg" for this suite.
Dec 29 12:26:11.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:26:12.065: INFO: namespace: e2e-tests-watch-rzrbg, resource: bindings, ignored listing per whitelist
Dec 29 12:26:12.087: INFO: namespace e2e-tests-watch-rzrbg deletion completed in 6.253578014s

• [SLOW TEST:6.740 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:26:12.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 29 12:26:23.446: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:26:24.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-mx6r4" for this suite.
Dec 29 12:26:51.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:26:51.606: INFO: namespace: e2e-tests-replicaset-mx6r4, resource: bindings, ignored listing per whitelist
Dec 29 12:26:51.644: INFO: namespace e2e-tests-replicaset-mx6r4 deletion completed in 27.109562076s

• [SLOW TEST:39.556 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:26:51.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9m9xl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 29 12:26:51.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 29 12:27:36.445: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9m9xl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 12:27:36.445: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 12:27:36.994: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:27:36.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9m9xl" for this suite.
Dec 29 12:28:01.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:28:01.419: INFO: namespace: e2e-tests-pod-network-test-9m9xl, resource: bindings, ignored listing per whitelist
Dec 29 12:28:01.430: INFO: namespace e2e-tests-pod-network-test-9m9xl deletion completed in 24.404212493s

• [SLOW TEST:69.786 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:28:01.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a7bb8411-2a36-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 12:28:01.947: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-57sp4" to be "success or failure"
Dec 29 12:28:01.963: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064367ms
Dec 29 12:28:04.034: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08655349s
Dec 29 12:28:06.822: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874662663s
Dec 29 12:28:08.862: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.915117829s
Dec 29 12:28:11.068: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.120812867s
Dec 29 12:28:13.091: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.143568509s
STEP: Saw pod success
Dec 29 12:28:13.091: INFO: Pod "pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:28:13.098: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 12:28:14.342: INFO: Waiting for pod pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005 to disappear
Dec 29 12:28:14.781: INFO: Pod pod-configmaps-a7bf8543-2a36-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:28:14.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-57sp4" for this suite.
Dec 29 12:28:20.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:28:21.027: INFO: namespace: e2e-tests-configmap-57sp4, resource: bindings, ignored listing per whitelist
Dec 29 12:28:21.128: INFO: namespace e2e-tests-configmap-57sp4 deletion completed in 6.332585959s

• [SLOW TEST:19.698 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:28:21.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:28:21.469: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 29 12:28:21.544: INFO: Number of nodes with available pods: 0
Dec 29 12:28:21.544: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 29 12:28:21.950: INFO: Number of nodes with available pods: 0
Dec 29 12:28:21.950: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:23.201: INFO: Number of nodes with available pods: 0
Dec 29 12:28:23.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:23.962: INFO: Number of nodes with available pods: 0
Dec 29 12:28:23.962: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:24.964: INFO: Number of nodes with available pods: 0
Dec 29 12:28:24.964: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:25.974: INFO: Number of nodes with available pods: 0
Dec 29 12:28:25.974: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:26.973: INFO: Number of nodes with available pods: 0
Dec 29 12:28:26.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:28.039: INFO: Number of nodes with available pods: 0
Dec 29 12:28:28.039: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:28.979: INFO: Number of nodes with available pods: 0
Dec 29 12:28:28.980: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:29.968: INFO: Number of nodes with available pods: 0
Dec 29 12:28:29.968: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:30.965: INFO: Number of nodes with available pods: 1
Dec 29 12:28:30.965: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 29 12:28:31.014: INFO: Number of nodes with available pods: 1
Dec 29 12:28:31.014: INFO: Number of running nodes: 0, number of available pods: 1
Dec 29 12:28:32.037: INFO: Number of nodes with available pods: 0
Dec 29 12:28:32.037: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 29 12:28:32.065: INFO: Number of nodes with available pods: 0
Dec 29 12:28:32.065: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:33.099: INFO: Number of nodes with available pods: 0
Dec 29 12:28:33.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:34.266: INFO: Number of nodes with available pods: 0
Dec 29 12:28:34.267: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:35.084: INFO: Number of nodes with available pods: 0
Dec 29 12:28:35.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:36.272: INFO: Number of nodes with available pods: 0
Dec 29 12:28:36.272: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:37.076: INFO: Number of nodes with available pods: 0
Dec 29 12:28:37.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:38.079: INFO: Number of nodes with available pods: 0
Dec 29 12:28:38.079: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:39.179: INFO: Number of nodes with available pods: 0
Dec 29 12:28:39.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:40.082: INFO: Number of nodes with available pods: 0
Dec 29 12:28:40.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:41.114: INFO: Number of nodes with available pods: 0
Dec 29 12:28:41.115: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:42.080: INFO: Number of nodes with available pods: 0
Dec 29 12:28:42.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:43.085: INFO: Number of nodes with available pods: 0
Dec 29 12:28:43.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:44.081: INFO: Number of nodes with available pods: 0
Dec 29 12:28:44.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:45.202: INFO: Number of nodes with available pods: 0
Dec 29 12:28:45.202: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:46.173: INFO: Number of nodes with available pods: 0
Dec 29 12:28:46.173: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:47.252: INFO: Number of nodes with available pods: 0
Dec 29 12:28:47.252: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:48.103: INFO: Number of nodes with available pods: 0
Dec 29 12:28:48.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:49.107: INFO: Number of nodes with available pods: 0
Dec 29 12:28:49.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 29 12:28:50.083: INFO: Number of nodes with available pods: 1
Dec 29 12:28:50.084: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-s4s5h, will wait for the garbage collector to delete the pods
Dec 29 12:28:50.185: INFO: Deleting DaemonSet.extensions daemon-set took: 29.985671ms
Dec 29 12:28:50.386: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.946138ms
Dec 29 12:29:03.107: INFO: Number of nodes with available pods: 0
Dec 29 12:29:03.107: INFO: Number of running nodes: 0, number of available pods: 0
Dec 29 12:29:03.112: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-s4s5h/daemonsets","resourceVersion":"16461738"},"items":null}

Dec 29 12:29:03.115: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-s4s5h/pods","resourceVersion":"16461738"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:29:03.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-s4s5h" for this suite.
Dec 29 12:29:09.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:29:09.254: INFO: namespace: e2e-tests-daemonsets-s4s5h, resource: bindings, ignored listing per whitelist
Dec 29 12:29:09.321: INFO: namespace e2e-tests-daemonsets-s4s5h deletion completed in 6.162903573s

• [SLOW TEST:48.193 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
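The DaemonSet block above re-checks the available-pod count roughly once per second until it matches the expected node count (or a timeout expires). A minimal Python sketch of that fixed-interval wait loop — the real framework is Go; the injectable `clock`/`sleep` parameters here exist only so the sketch is testable:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout seconds elapse.

    Mirrors the once-per-second polling visible in the DaemonSet log,
    where node/pod counts are re-read until they reach the expected value.
    Returns True if the condition was met before the deadline.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

Injecting `clock` and `sleep` keeps the helper deterministic under test, which is the usual way time-based poll loops are exercised without real waiting.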
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:29:09.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 29 12:29:09.528: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 29 12:29:09.548: INFO: Waiting for terminating namespaces to be deleted...
Dec 29 12:29:09.550: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 29 12:29:09.561: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:29:09.561: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 29 12:29:09.561: INFO: 	Container weave ready: true, restart count 0
Dec 29 12:29:09.561: INFO: 	Container weave-npc ready: true, restart count 0
Dec 29 12:29:09.561: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 29 12:29:09.561: INFO: 	Container coredns ready: true, restart count 0
Dec 29 12:29:09.561: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:29:09.561: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:29:09.561: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 29 12:29:09.561: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 29 12:29:09.561: INFO: 	Container coredns ready: true, restart count 0
Dec 29 12:29:09.562: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Dec 29 12:29:09.562: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d74c42a0-2a36-11ea-9252-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d74c42a0-2a36-11ea-9252-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d74c42a0-2a36-11ea-9252-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:29:32.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-sr95x" for this suite.
Dec 29 12:29:56.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:29:56.261: INFO: namespace: e2e-tests-sched-pred-sr95x, resource: bindings, ignored listing per whitelist
Dec 29 12:29:56.283: INFO: namespace e2e-tests-sched-pred-sr95x deletion completed in 24.18126464s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:46.962 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
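The predicate validated above is plain exact-match label selection: a pod carrying a `nodeSelector` can only be scheduled onto a node whose labels contain every requested key/value pair, which is why the test applies a random label, relaunches the pod with that selector, then removes the label again. A Python sketch of the matching rule (the real scheduler predicate is Go; this only models the exact-match case, not nodeAffinity expressions):

```python
def node_selector_matches(pod_node_selector, node_labels):
    """True iff every key/value pair in the pod's nodeSelector is present
    verbatim in the node's labels. An empty selector matches any node."""
    return all(node_labels.get(k) == v for k, v in pod_node_selector.items())
```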
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:29:56.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-xzcz7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xzcz7 to expose endpoints map[]
Dec 29 12:29:56.587: INFO: Get endpoints failed (33.380509ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 29 12:29:57.614: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xzcz7 exposes endpoints map[] (1.060058664s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-xzcz7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xzcz7 to expose endpoints map[pod1:[80]]
Dec 29 12:30:02.818: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.144073975s elapsed, will retry)
Dec 29 12:30:09.282: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xzcz7 exposes endpoints map[pod1:[80]] (11.608155484s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-xzcz7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xzcz7 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 29 12:30:13.850: INFO: Unexpected endpoints: found map[ecb94852-2a36-11ea-a994-fa163e34d433:[80]], expected map[pod2:[80] pod1:[80]] (4.545342693s elapsed, will retry)
Dec 29 12:30:19.288: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xzcz7 exposes endpoints map[pod1:[80] pod2:[80]] (9.982991685s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-xzcz7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xzcz7 to expose endpoints map[pod2:[80]]
Dec 29 12:30:20.709: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xzcz7 exposes endpoints map[pod2:[80]] (1.413034599s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-xzcz7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xzcz7 to expose endpoints map[]
Dec 29 12:30:21.811: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xzcz7 exposes endpoints map[] (1.083170782s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:30:23.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-xzcz7" for this suite.
Dec 29 12:30:47.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:30:47.430: INFO: namespace: e2e-tests-services-xzcz7, resource: bindings, ignored listing per whitelist
Dec 29 12:30:47.465: INFO: namespace e2e-tests-services-xzcz7 deletion completed in 24.39000542s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:51.181 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
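The `exposes endpoints map[...]` checks above normalize the observed endpoints into a pod-name → port-list map and compare it against the expected map, retrying until they agree (the intermediate `found map[ecb94852-…:[80]]` line is one such retry before the pod name resolves). A small Python sketch of that comparison — ports compared order-insensitively; names mirror the log wording, not the framework's actual API:

```python
def endpoints_equal(expected, observed):
    """Compare two endpoint maps of the form {pod_name: [ports...]},
    ignoring port order, like the log's
    'exposes endpoints map[pod1:[80] pod2:[80]]' validation."""
    normalize = lambda m: {name: sorted(ports) for name, ports in m.items()}
    return normalize(expected) == normalize(observed)
```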
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:30:47.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:30:47.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-sk9rw" to be "success or failure"
Dec 29 12:30:47.712: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.026929ms
Dec 29 12:30:49.731: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033850426s
Dec 29 12:30:51.812: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115456704s
Dec 29 12:30:54.279: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581726225s
Dec 29 12:30:56.296: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.598653195s
Dec 29 12:30:58.313: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.616348594s
STEP: Saw pod success
Dec 29 12:30:58.313: INFO: Pod "downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:30:58.320: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:30:58.660: INFO: Waiting for pod downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005 to disappear
Dec 29 12:30:58.789: INFO: Pod downwardapi-volume-0a8c0e89-2a37-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:30:58.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sk9rw" for this suite.
Dec 29 12:31:06.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:31:07.030: INFO: namespace: e2e-tests-projected-sk9rw, resource: bindings, ignored listing per whitelist
Dec 29 12:31:07.124: INFO: namespace e2e-tests-projected-sk9rw deletion completed in 8.320692313s

• [SLOW TEST:19.658 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
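The projected downwardAPI test above mounts a volume whose file content is resolved from a fieldRef such as `metadata.name`. The downward API only permits a fixed whitelist of field paths; the sketch below shows just the dotted-path resolution idea in Python, against a pod represented as nested dicts (illustrative, not the kubelet's implementation):

```python
def resolve_field_ref(pod, field_path):
    """Resolve a downward-API style fieldRef (e.g. 'metadata.name')
    against a pod object represented as nested dicts. This is what ends
    up as the content of the projected file the test reads back."""
    obj = pod
    for part in field_path.split("."):
        obj = obj[part]
    return obj
```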
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:31:07.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 29 12:31:07.369: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix224174231/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:31:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rgzt7" for this suite.
Dec 29 12:31:13.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:31:13.730: INFO: namespace: e2e-tests-kubectl-rgzt7, resource: bindings, ignored listing per whitelist
Dec 29 12:31:13.806: INFO: namespace e2e-tests-kubectl-rgzt7 deletion completed in 6.316996785s

• [SLOW TEST:6.682 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:31:13.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-6dsr5
Dec 29 12:31:24.101: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-6dsr5
STEP: checking the pod's current state and verifying that restartCount is present
Dec 29 12:31:24.104: INFO: Initial restart count of pod liveness-http is 0
Dec 29 12:31:53.364: INFO: Restart count of pod e2e-tests-container-probe-6dsr5/liveness-http is now 1 (29.259955339s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:31:53.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6dsr5" for this suite.
Dec 29 12:31:59.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:31:59.611: INFO: namespace: e2e-tests-container-probe-6dsr5, resource: bindings, ignored listing per whitelist
Dec 29 12:31:59.853: INFO: namespace e2e-tests-container-probe-6dsr5 deletion completed in 6.341968645s

• [SLOW TEST:46.046 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:31:59.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-vs47
STEP: Creating a pod to test atomic-volume-subpath
Dec 29 12:32:00.079: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vs47" in namespace "e2e-tests-subpath-lvxzs" to be "success or failure"
Dec 29 12:32:00.112: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 33.042048ms
Dec 29 12:32:02.125: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046331982s
Dec 29 12:32:04.151: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072396306s
Dec 29 12:32:06.179: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100290457s
Dec 29 12:32:08.200: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121237645s
Dec 29 12:32:10.577: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 10.498203518s
Dec 29 12:32:12.677: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 12.598380325s
Dec 29 12:32:14.708: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Pending", Reason="", readiness=false. Elapsed: 14.628989925s
Dec 29 12:32:16.737: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 16.65809222s
Dec 29 12:32:18.759: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 18.679898959s
Dec 29 12:32:20.778: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 20.699042678s
Dec 29 12:32:22.796: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 22.716686819s
Dec 29 12:32:24.811: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 24.731668747s
Dec 29 12:32:26.830: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 26.750915436s
Dec 29 12:32:28.844: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 28.765145361s
Dec 29 12:32:30.868: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 30.789385344s
Dec 29 12:32:32.918: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Running", Reason="", readiness=false. Elapsed: 32.838663622s
Dec 29 12:32:35.682: INFO: Pod "pod-subpath-test-secret-vs47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.603363465s
STEP: Saw pod success
Dec 29 12:32:35.683: INFO: Pod "pod-subpath-test-secret-vs47" satisfied condition "success or failure"
Dec 29 12:32:35.708: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-vs47 container test-container-subpath-secret-vs47: 
STEP: delete the pod
Dec 29 12:32:35.959: INFO: Waiting for pod pod-subpath-test-secret-vs47 to disappear
Dec 29 12:32:35.970: INFO: Pod pod-subpath-test-secret-vs47 no longer exists
STEP: Deleting pod pod-subpath-test-secret-vs47
Dec 29 12:32:35.970: INFO: Deleting pod "pod-subpath-test-secret-vs47" in namespace "e2e-tests-subpath-lvxzs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:32:35.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lvxzs" for this suite.
Dec 29 12:32:44.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:32:44.232: INFO: namespace: e2e-tests-subpath-lvxzs, resource: bindings, ignored listing per whitelist
Dec 29 12:32:44.378: INFO: namespace e2e-tests-subpath-lvxzs deletion completed in 8.39156454s

• [SLOW TEST:44.525 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
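A volumeMount `subPath` selects one entry inside the mounted volume — here a single secret key — and the kubelet must ensure the resolved path cannot escape the volume root. A Python sketch of that resolve-and-contain check (illustrative only; the kubelet's real implementation additionally guards against symlink races):

```python
import posixpath

def resolve_subpath(volume_root, sub_path):
    """Join a volumeMount subPath onto the volume root and reject any
    path that would escape the volume (e.g. via '..' components)."""
    full = posixpath.normpath(posixpath.join(volume_root, sub_path))
    root = posixpath.normpath(volume_root)
    if full != root and not full.startswith(root + "/"):
        raise ValueError("subPath escapes volume: %r" % sub_path)
    return full
```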
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:32:44.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-k85nc/secret-test-503c0c11-2a37-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 12:32:44.660: INFO: Waiting up to 5m0s for pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-k85nc" to be "success or failure"
Dec 29 12:32:44.675: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.056178ms
Dec 29 12:32:46.696: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036105033s
Dec 29 12:32:48.756: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096266093s
Dec 29 12:32:51.052: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392266711s
Dec 29 12:32:53.071: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410739289s
Dec 29 12:32:55.091: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.430714021s
Dec 29 12:32:57.103: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.442701933s
STEP: Saw pod success
Dec 29 12:32:57.103: INFO: Pod "pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:32:57.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005 container env-test: 
STEP: delete the pod
Dec 29 12:32:57.938: INFO: Waiting for pod pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005 to disappear
Dec 29 12:32:58.224: INFO: Pod pod-configmaps-504416ec-2a37-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:32:58.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-k85nc" for this suite.
Dec 29 12:33:06.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:33:06.645: INFO: namespace: e2e-tests-secrets-k85nc, resource: bindings, ignored listing per whitelist
Dec 29 12:33:06.690: INFO: namespace e2e-tests-secrets-k85nc deletion completed in 8.451781815s

• [SLOW TEST:22.311 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
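Consuming a secret "via the environment", as the test above does, means each `env[].valueFrom.secretKeyRef` entry is resolved by base64-decoding the matching value in the secret's `data` map and exporting it under the chosen variable name. A Python sketch of that mapping (the env and key names below are illustrative, not taken from the test):

```python
import base64

def env_from_secret(secret_data, mappings):
    """Build container environment variables from a secret's base64-encoded
    `data` field. `mappings` is {env_var_name: secret_key}, mirroring
    env[].valueFrom.secretKeyRef entries in a pod spec."""
    return {env: base64.b64decode(secret_data[key]).decode()
            for env, key in mappings.items()}
```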
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:33:06.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 29 12:33:06.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:08.731: INFO: stderr: ""
Dec 29 12:33:08.731: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 29 12:33:08.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:08.984: INFO: stderr: ""
Dec 29 12:33:08.984: INFO: stdout: "update-demo-nautilus-j2j56 update-demo-nautilus-j5bqb "
Dec 29 12:33:08.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2j56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:09.170: INFO: stderr: ""
Dec 29 12:33:09.170: INFO: stdout: ""
Dec 29 12:33:09.171: INFO: update-demo-nautilus-j2j56 is created but not running
Dec 29 12:33:14.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:14.286: INFO: stderr: ""
Dec 29 12:33:14.286: INFO: stdout: "update-demo-nautilus-j2j56 update-demo-nautilus-j5bqb "
Dec 29 12:33:14.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2j56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:14.424: INFO: stderr: ""
Dec 29 12:33:14.425: INFO: stdout: ""
Dec 29 12:33:14.425: INFO: update-demo-nautilus-j2j56 is created but not running
Dec 29 12:33:19.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:19.608: INFO: stderr: ""
Dec 29 12:33:19.609: INFO: stdout: "update-demo-nautilus-j2j56 update-demo-nautilus-j5bqb "
Dec 29 12:33:19.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2j56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:19.807: INFO: stderr: ""
Dec 29 12:33:19.808: INFO: stdout: ""
Dec 29 12:33:19.808: INFO: update-demo-nautilus-j2j56 is created but not running
Dec 29 12:33:24.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:24.950: INFO: stderr: ""
Dec 29 12:33:24.951: INFO: stdout: "update-demo-nautilus-j2j56 update-demo-nautilus-j5bqb "
Dec 29 12:33:24.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2j56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:25.078: INFO: stderr: ""
Dec 29 12:33:25.078: INFO: stdout: "true"
Dec 29 12:33:25.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2j56 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:25.172: INFO: stderr: ""
Dec 29 12:33:25.172: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:33:25.173: INFO: validating pod update-demo-nautilus-j2j56
Dec 29 12:33:25.190: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:33:25.190: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 29 12:33:25.190: INFO: update-demo-nautilus-j2j56 is verified up and running
Dec 29 12:33:25.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5bqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:25.307: INFO: stderr: ""
Dec 29 12:33:25.307: INFO: stdout: "true"
Dec 29 12:33:25.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5bqb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:25.398: INFO: stderr: ""
Dec 29 12:33:25.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 29 12:33:25.398: INFO: validating pod update-demo-nautilus-j5bqb
Dec 29 12:33:25.409: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 29 12:33:25.409: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 29 12:33:25.410: INFO: update-demo-nautilus-j5bqb is verified up and running
STEP: using delete to clean up resources
Dec 29 12:33:25.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:25.532: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 12:33:25.532: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 29 12:33:25.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kk4dr'
Dec 29 12:33:25.657: INFO: stderr: "No resources found.\n"
Dec 29 12:33:25.657: INFO: stdout: ""
Dec 29 12:33:25.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kk4dr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 29 12:33:25.833: INFO: stderr: ""
Dec 29 12:33:25.833: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:33:25.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kk4dr" for this suite.
Dec 29 12:33:49.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:33:50.073: INFO: namespace: e2e-tests-kubectl-kk4dr, resource: bindings, ignored listing per whitelist
Dec 29 12:33:50.151: INFO: namespace e2e-tests-kubectl-kk4dr deletion completed in 24.282502106s

• [SLOW TEST:43.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
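The Update Demo test above re-runs the same `kubectl get pods ... --template` probe every five seconds until the container status template prints `true`. A minimal sketch of that retry loop (Python rather than the suite's Go; the `is_running` probe is a hypothetical stand-in for the kubectl call):

```python
import time

def wait_until_running(is_running, timeout=300.0, interval=5.0, sleep=time.sleep):
    """Poll is_running() every `interval` seconds until it returns True
    or `timeout` seconds have elapsed. Mirrors the log's 5s retry cadence."""
    waited = 0.0
    while waited <= timeout:
        if is_running():
            return True
        sleep(interval)
        waited += interval
    return False

# Simulated probe: the pod reports running on the fourth check,
# matching the four attempts visible in the log above.
answers = iter([False, False, False, True])
assert wait_until_running(lambda: next(answers), sleep=lambda _: None)
```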
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:33:50.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-25k42
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 29 12:33:50.467: INFO: Found 0 stateful pods, waiting for 3
Dec 29 12:34:00.517: INFO: Found 1 stateful pods, waiting for 3
Dec 29 12:34:10.545: INFO: Found 2 stateful pods, waiting for 3
Dec 29 12:34:20.563: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 12:34:20.563: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 12:34:20.563: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 29 12:34:30.522: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 12:34:30.523: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 12:34:30.523: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 29 12:34:30.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-25k42 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 12:34:31.619: INFO: stderr: ""
Dec 29 12:34:31.619: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 12:34:31.619: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 29 12:34:41.707: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 29 12:34:51.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-25k42 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 12:34:52.570: INFO: stderr: ""
Dec 29 12:34:52.571: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 12:34:52.571: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 12:34:52.748: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:34:52.749: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:34:52.749: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:34:52.749: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:02.811: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:35:02.811: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:02.811: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:12.763: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:35:12.764: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:12.764: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:22.853: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:35:22.854: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:32.781: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:35:32.781: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 29 12:35:42.915: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 29 12:35:52.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-25k42 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 29 12:35:53.656: INFO: stderr: ""
Dec 29 12:35:53.656: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 29 12:35:53.656: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 29 12:36:03.788: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 29 12:36:13.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-25k42 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 29 12:36:14.670: INFO: stderr: ""
Dec 29 12:36:14.670: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 29 12:36:14.670: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 29 12:36:24.800: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:36:24.800: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 29 12:36:24.800: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 29 12:36:34.830: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:36:34.830: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 29 12:36:34.830: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 29 12:36:45.552: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:36:45.552: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 29 12:36:54.827: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
Dec 29 12:36:54.827: INFO: Waiting for Pod e2e-tests-statefulset-25k42/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 29 12:37:04.894: INFO: Waiting for StatefulSet e2e-tests-statefulset-25k42/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 29 12:37:14.846: INFO: Deleting all statefulset in ns e2e-tests-statefulset-25k42
Dec 29 12:37:14.869: INFO: Scaling statefulset ss2 to 0
Dec 29 12:37:34.931: INFO: Waiting for statefulset status.replicas updated to 0
Dec 29 12:37:34.936: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:37:35.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-25k42" for this suite.
Dec 29 12:37:43.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:37:43.114: INFO: namespace: e2e-tests-statefulset-25k42, resource: bindings, ignored listing per whitelist
Dec 29 12:37:43.331: INFO: namespace e2e-tests-statefulset-25k42 deletion completed in 8.287412682s

• [SLOW TEST:233.179 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
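The "Updating Pods in reverse ordinal order" step above reflects how a StatefulSet's RollingUpdate strategy walks its pods: highest ordinal first (ss2-2, then ss2-1, then ss2-0). A sketch of that ordering, including the optional `partition` boundary (this is a hypothetical helper, not the controller's actual code):

```python
def update_order(name, replicas, partition=0):
    """Return pod names in the order a RollingUpdate strategy touches them:
    highest ordinal first, stopping at the partition boundary."""
    return [f"{name}-{i}" for i in range(replicas - 1, partition - 1, -1)]

assert update_order("ss2", 3) == ["ss2-2", "ss2-1", "ss2-0"]
# With partition=1, ordinal 0 stays on the old revision:
assert update_order("ss2", 3, partition=1) == ["ss2-2", "ss2-1"]
```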
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:37:43.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j9rg7
Dec 29 12:37:53.667: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j9rg7
STEP: checking the pod's current state and verifying that restartCount is present
Dec 29 12:37:53.681: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:41:55.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j9rg7" for this suite.
Dec 29 12:42:03.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:42:03.700: INFO: namespace: e2e-tests-container-probe-j9rg7, resource: bindings, ignored listing per whitelist
Dec 29 12:42:03.833: INFO: namespace e2e-tests-container-probe-j9rg7 deletion completed in 8.325342191s

• [SLOW TEST:260.502 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:42:03.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 29 12:42:11.898: INFO: 10 pods remaining
Dec 29 12:42:11.898: INFO: 10 pods have nil DeletionTimestamp
Dec 29 12:42:11.898: INFO: 
Dec 29 12:42:13.784: INFO: 10 pods remaining
Dec 29 12:42:13.785: INFO: 9 pods have nil DeletionTimestamp
Dec 29 12:42:13.785: INFO: 
Dec 29 12:42:15.049: INFO: 7 pods remaining
Dec 29 12:42:15.050: INFO: 0 pods have nil DeletionTimestamp
Dec 29 12:42:15.050: INFO: 
STEP: Gathering metrics
W1229 12:42:15.412223       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 12:42:15.412: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:42:15.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-99w2z" for this suite.
Dec 29 12:42:31.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:42:31.831: INFO: namespace: e2e-tests-gc-99w2z, resource: bindings, ignored listing per whitelist
Dec 29 12:42:31.909: INFO: namespace e2e-tests-gc-99w2z deletion completed in 16.489688941s

• [SLOW TEST:28.075 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
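The garbage-collector test above tracks deletion progress as two counts: pods still present, and pods whose DeletionTimestamp has not yet been set by the API server. A sketch of that bookkeeping (hypothetical helper working on pod dicts, standing in for the real API objects):

```python
def gc_progress(pods):
    """Summarize deletion progress the way the test logs it:
    (pods remaining, pods whose DeletionTimestamp is still nil)."""
    remaining = len(pods)
    nil_ts = sum(1 for p in pods if p.get("deletionTimestamp") is None)
    return remaining, nil_ts

# Early in deletion, nothing is marked yet (cf. "10 pods remaining"):
pods = [{"deletionTimestamp": None}] * 10
assert gc_progress(pods) == (10, 10)
# Later, every surviving pod carries a timestamp (cf. "7 pods remaining"):
pods = [{"deletionTimestamp": "2019-12-29T12:42:14Z"}] * 7
assert gc_progress(pods) == (7, 0)
```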
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:42:31.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:42:32.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-vr8mk" to be "success or failure"
Dec 29 12:42:32.259: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.065613ms
Dec 29 12:42:34.273: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125184076s
Dec 29 12:42:36.283: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13517052s
Dec 29 12:42:38.689: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541604796s
Dec 29 12:42:41.108: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.960394269s
Dec 29 12:42:43.133: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.985495534s
Dec 29 12:42:45.150: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.002158377s
STEP: Saw pod success
Dec 29 12:42:45.150: INFO: Pod "downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:42:45.157: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:42:45.753: INFO: Waiting for pod downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005 to disappear
Dec 29 12:42:45.772: INFO: Pod downwardapi-volume-ae69d747-2a38-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:42:45.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vr8mk" for this suite.
Dec 29 12:42:51.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:42:52.013: INFO: namespace: e2e-tests-downward-api-vr8mk, resource: bindings, ignored listing per whitelist
Dec 29 12:42:52.015: INFO: namespace e2e-tests-downward-api-vr8mk deletion completed in 6.225278234s

• [SLOW TEST:20.105 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
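The downward API volume tests above and below mount a file whose content is a container resource value rendered through a `resourceFieldRef` divisor: the quantity is divided by the divisor and rounded up to a whole number. A sketch of that rendering, assuming millicore inputs (the function name and units are illustrative, not the plugin's actual code):

```python
import math

def downward_api_cpu(millicores, divisor_millicores=1):
    """Render a cpu resourceFieldRef the way the downward API volume does:
    quantity divided by the divisor, rounded up to an integer string."""
    return str(math.ceil(millicores / divisor_millicores))

# A 250m cpu request with divisor 1m is exposed as "250";
# with a divisor of one core (1000m) it rounds up to "1".
assert downward_api_cpu(250, 1) == "250"
assert downward_api_cpu(250, 1000) == "1"
```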
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:42:52.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:42:52.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-79hfz" to be "success or failure"
Dec 29 12:42:52.459: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 150.223372ms
Dec 29 12:42:54.840: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531217368s
Dec 29 12:42:56.886: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576746439s
Dec 29 12:42:58.915: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606088999s
Dec 29 12:43:01.003: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.693929791s
Dec 29 12:43:03.106: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.796568389s
STEP: Saw pod success
Dec 29 12:43:03.106: INFO: Pod "downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:43:03.132: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:43:03.369: INFO: Waiting for pod downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005 to disappear
Dec 29 12:43:03.377: INFO: Pod downwardapi-volume-ba6f646e-2a38-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:43:03.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-79hfz" for this suite.
Dec 29 12:43:09.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:43:09.559: INFO: namespace: e2e-tests-downward-api-79hfz, resource: bindings, ignored listing per whitelist
Dec 29 12:43:09.594: INFO: namespace e2e-tests-downward-api-79hfz deletion completed in 6.203447021s

• [SLOW TEST:17.579 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:43:09.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 12:43:09.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9bm9k'
Dec 29 12:43:12.050: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 29 12:43:12.050: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 29 12:43:14.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9bm9k'
Dec 29 12:43:14.930: INFO: stderr: ""
Dec 29 12:43:14.931: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:43:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9bm9k" for this suite.
Dec 29 12:43:21.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:43:21.652: INFO: namespace: e2e-tests-kubectl-9bm9k, resource: bindings, ignored listing per whitelist
Dec 29 12:43:21.666: INFO: namespace e2e-tests-kubectl-9bm9k deletion completed in 6.297613551s

• [SLOW TEST:12.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
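The stderr captured above flags `kubectl run --generator=deployment/apps.v1` as deprecated. A Deployment manifest roughly equivalent to what that generator produced for this run is sketched below (the `run:` label key is an assumption based on the generator's conventions; only the name and image come from the log):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

With the generator removed, the same object can be created with `kubectl create deployment` or by applying a manifest like this one.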
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:43:21.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1229 12:43:32.002350       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 12:43:32.002: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:43:32.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-v9w4w" for this suite.
Dec 29 12:43:38.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:43:38.399: INFO: namespace: e2e-tests-gc-v9w4w, resource: bindings, ignored listing per whitelist
Dec 29 12:43:38.420: INFO: namespace e2e-tests-gc-v9w4w deletion completed in 6.412560318s

• [SLOW TEST:16.754 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
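The garbage collector spec above deletes the ReplicationController with a non-orphaning propagation policy, which is why its pods are collected as well. The DeleteOptions body selecting that behavior looks like this (a sketch; the test sets it through client-go rather than a literal manifest):

```yaml
# Sent as the body of the DELETE request for the ReplicationController.
# Background (or Foreground) cascades deletion to dependent pods via their
# ownerReferences; Orphan would instead leave the pods running with the
# owner reference cleared.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background
```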
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:43:38.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:43:38.677: INFO: Creating deployment "nginx-deployment"
Dec 29 12:43:38.691: INFO: Waiting for observed generation 1
Dec 29 12:43:42.093: INFO: Waiting for all required pods to come up
Dec 29 12:43:42.698: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 29 12:44:22.992: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 29 12:44:23.002: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 29 12:44:23.016: INFO: Updating deployment nginx-deployment
Dec 29 12:44:23.016: INFO: Waiting for observed generation 2
Dec 29 12:44:25.975: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 29 12:44:26.292: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 29 12:44:27.092: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 29 12:44:28.013: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 29 12:44:28.013: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 29 12:44:28.029: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 29 12:44:28.146: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 29 12:44:28.146: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 29 12:44:29.209: INFO: Updating deployment nginx-deployment
Dec 29 12:44:29.209: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 29 12:44:30.061: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 29 12:44:30.199: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
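The two verification lines above (`.spec.replicas = 20` and `= 13`) follow from proportional scaling: when the deployment is scaled from 10 to 30 mid-rollout, each ReplicaSet's share grows in proportion to its size relative to the allowed total (`spec.replicas + maxSurge`, i.e. 13 before and 33 after, per the `max-replicas` annotations in the dumps below). A simplified sketch of the arithmetic, assuming nearest-integer rounding (the real controller logic in `pkg/controller/deployment/util` also distributes rounding leftovers explicitly):

```python
def proportional_sizes(rs_sizes, old_allowed_total, new_allowed_total):
    """Each ReplicaSet's new size is its current size scaled by
    new_allowed_total / old_allowed_total, rounded to the nearest
    integer. 'Allowed total' = spec.replicas + maxSurge, i.e. the
    deployment.kubernetes.io/max-replicas annotation before/after
    the scale. This is a simplification of the controller's logic."""
    return [round(r * new_allowed_total / old_allowed_total) for r in rs_sizes]

# Numbers from this run: old RS at 8, new RS at 5; the deployment is
# scaled from 10 to 30 with maxSurge=3, so the allowed total goes 13 -> 33.
print(proportional_sizes([8, 5], 13, 33))  # [20, 13], matching the log
```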
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 29 12:44:34.613: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-mkskq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mkskq/deployments/nginx-deployment,UID:d61b6ac2-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463958,Generation:3,CreationTimestamp:2019-12-29 12:43:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-29 12:44:30 +0000 UTC 2019-12-29 12:44:30 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-29 12:44:33 +0000 UTC 2019-12-29 12:43:38 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 29 12:44:34.650: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-mkskq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mkskq/replicasets/nginx-deployment-5c98f8fb5,UID:f089c91c-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463953,Generation:3,CreationTimestamp:2019-12-29 12:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d61b6ac2-2a38-11ea-a994-fa163e34d433 0xc0024e3c17 0xc0024e3c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 12:44:34.650: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 29 12:44:34.651: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-mkskq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mkskq/replicasets/nginx-deployment-85ddf47c5d,UID:d620a50e-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463932,Generation:3,CreationTimestamp:2019-12-29 12:43:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d61b6ac2-2a38-11ea-a994-fa163e34d433 0xc0024e3d17 0xc0024e3d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 29 12:44:35.242: INFO: Pod "nginx-deployment-5c98f8fb5-224x8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-224x8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-224x8,UID:f0e382f7-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463881,Generation:0,CreationTimestamp:2019-12-29 12:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215eb07 0xc00215eb08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215eb70} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215eb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.242: INFO: Pod "nginx-deployment-5c98f8fb5-29kds" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-29kds,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-29kds,UID:f649dc6f-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463951,Generation:0,CreationTimestamp:2019-12-29 12:44:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215ec57 0xc00215ec58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215ecc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215ece0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.243: INFO: Pod "nginx-deployment-5c98f8fb5-2s9dx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2s9dx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-2s9dx,UID:f60e44e7-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463946,Generation:0,CreationTimestamp:2019-12-29 12:44:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215ed57 0xc00215ed58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215edc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215ede0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.243: INFO: Pod "nginx-deployment-5c98f8fb5-457fs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-457fs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-457fs,UID:f60e018d-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463947,Generation:0,CreationTimestamp:2019-12-29 12:44:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215ee57 0xc00215ee58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215eec0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215eee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.243: INFO: Pod "nginx-deployment-5c98f8fb5-9n6w9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9n6w9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-9n6w9,UID:f0976416-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463865,Generation:0,CreationTimestamp:2019-12-29 12:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215ef57 0xc00215ef58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215efc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215efe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.244: INFO: Pod "nginx-deployment-5c98f8fb5-ccwfh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ccwfh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-ccwfh,UID:f096655d-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463878,Generation:0,CreationTimestamp:2019-12-29 12:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f0a7 0xc00215f0a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f110} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.244: INFO: Pod "nginx-deployment-5c98f8fb5-dm7bp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dm7bp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-dm7bp,UID:f52be96c-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463933,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f1f7 0xc00215f1f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f260} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.244: INFO: Pod "nginx-deployment-5c98f8fb5-nc2n2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nc2n2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-nc2n2,UID:f60dec1f-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463944,Generation:0,CreationTimestamp:2019-12-29 12:44:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f2f7 0xc00215f2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f360} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.245: INFO: Pod "nginx-deployment-5c98f8fb5-pb5m4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pb5m4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-pb5m4,UID:f0dd5a11-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463879,Generation:0,CreationTimestamp:2019-12-29 12:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f3f7 0xc00215f3f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f460} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.245: INFO: Pod "nginx-deployment-5c98f8fb5-ps529" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ps529,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-ps529,UID:f60dc5de-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463942,Generation:0,CreationTimestamp:2019-12-29 12:44:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f547 0xc00215f548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f5c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.245: INFO: Pod "nginx-deployment-5c98f8fb5-w6pdd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w6pdd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-w6pdd,UID:f08db6e4-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463853,Generation:0,CreationTimestamp:2019-12-29 12:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f687 0xc00215f688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f740} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.246: INFO: Pod "nginx-deployment-5c98f8fb5-wdqtr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wdqtr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-wdqtr,UID:f5000cb5-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463928,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f827 0xc00215f828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f8a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.246: INFO: Pod "nginx-deployment-5c98f8fb5-z49jl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z49jl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-5c98f8fb5-z49jl,UID:f52c6c91-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463930,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f089c91c-2a38-11ea-a994-fa163e34d433 0xc00215f937 0xc00215f938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215f9a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00215f9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
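Every dump above comes from the new ReplicaSet `nginx-deployment-5c98f8fb5`, whose pod template references the image `nginx:404` (an invalid tag used by this test), so each pod stays `Phase:Pending` and the framework logs it as "not available". As a rough illustration only (not the e2e framework's actual code), availability boils down to the pod's `Ready` condition being `True` for at least `minReadySeconds`; the helper below is a hypothetical sketch of that rule applied to condition lists shaped like the ones in these dumps:

```python
from datetime import datetime, timedelta

def is_pod_available(conditions, min_ready_seconds, now):
    """Illustrative (hypothetical) re-statement of the availability rule:
    the pod must carry a Ready=True condition whose lastTransitionTime is
    at least min_ready_seconds in the past. `conditions` mimics the
    Conditions lists printed in the dumps above."""
    for cond in conditions:
        if cond["type"] == "Ready" and cond["status"] == "True":
            ready_since = cond["lastTransitionTime"]
            return now - ready_since >= timedelta(seconds=min_ready_seconds)
    return False  # Ready is False or absent -> pod is not available

# The Pending pods above only report Ready=False (ContainersNotReady):
pending = [{"type": "Ready", "status": "False",
            "lastTransitionTime": datetime(2019, 12, 29, 12, 44, 23)}]
now = datetime(2019, 12, 29, 12, 44, 35)
print(is_pod_available(pending, 0, now))  # False
```

This matches what the log shows: pods whose status has only `PodScheduled` or a `Ready False … ContainersNotReady` condition are reported "is not available".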
Dec 29 12:44:35.246: INFO: Pod "nginx-deployment-85ddf47c5d-47p7d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-47p7d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-47p7d,UID:f4d1a79d-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463943,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc00215fa37 0xc00215fa38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00215faa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00215fac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.247: INFO: Pod "nginx-deployment-85ddf47c5d-8t92x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8t92x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-8t92x,UID:f4de7f00-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463907,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc00215fc27 0xc00215fc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00215fc90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00215fcb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.247: INFO: Pod "nginx-deployment-85ddf47c5d-c2phj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c2phj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-c2phj,UID:f4be9e63-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463929,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc00215fd57 0xc00215fd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00215fe00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00215fe20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.248: INFO: Pod "nginx-deployment-85ddf47c5d-h45f6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h45f6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-h45f6,UID:d64bdf34-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463824,Generation:0,CreationTimestamp:2019-12-29 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc00215fed7 0xc00215fed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00215ff40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00215ff60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-29 12:43:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11955b4a9e4e7fc4a31f4745ecbccff6bd6ebf0502da2686bc5400125a64b058}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
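By contrast, `nginx-deployment-85ddf47c5d-h45f6` above belongs to the old ReplicaSet (image `nginx:1.14-alpine`), is `Phase:Running` with `Ready True`, and is logged as "is available". When skimming long runs of these lines, a small tally per pod-template-hash makes the rollout state obvious at a glance; the snippet below is a hypothetical helper, not part of the test suite:

```python
import re
from collections import Counter

# Matches the INFO lines in this log, e.g.:
#   Dec 29 12:44:35.248: INFO: Pod "nginx-deployment-85ddf47c5d-h45f6" is available:
LINE = re.compile(
    r'Pod "nginx-deployment-([0-9a-f]+)-[a-z0-9]+" is (not available|available):')

def tally(log_text):
    """Count available / not-available pods per pod-template-hash."""
    counts = Counter()
    for m in LINE.finditer(log_text):
        counts[(m.group(1), m.group(2))] += 1
    return counts

sample = '''Dec 29 12:44:35.246: INFO: Pod "nginx-deployment-5c98f8fb5-z49jl" is not available:
Dec 29 12:44:35.248: INFO: Pod "nginx-deployment-85ddf47c5d-h45f6" is available:'''
print(tally(sample))
```

Run over the section above, this would show every `5c98f8fb5` pod as "not available" and only `85ddf47c5d` pods as "available", which is exactly the stalled-rollout state the test is asserting.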
Dec 29 12:44:35.248: INFO: Pod "nginx-deployment-85ddf47c5d-hjxgk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hjxgk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-hjxgk,UID:d6362c48-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463782,Generation:0,CreationTimestamp:2019-12-29 12:43:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0018142d7 0xc0018142d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001814340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001814360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-29 12:43:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e21e85096968fe32b0a5a337ded9a099b1ca504c281a4b3b295d51f766f25898}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.248: INFO: Pod "nginx-deployment-85ddf47c5d-jpktr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jpktr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-jpktr,UID:d6569584-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463788,Generation:0,CreationTimestamp:2019-12-29 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0018146a7 0xc0018146a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001814710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001814740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-29 12:43:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://32408d9656cf475024f3a053cbbfa9d0da4c524e7cb7cf293b491d02b094b101}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.248: INFO: Pod "nginx-deployment-85ddf47c5d-lhb4j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lhb4j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-lhb4j,UID:d64b657e-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463820,Generation:0,CreationTimestamp:2019-12-29 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc001814e37 0xc001814e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001815530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001815650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-29 12:43:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://77dde89ee67b3695f37c357b33b37cf17db833346fb683d7a2680d4b93b65658}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.249: INFO: Pod "nginx-deployment-85ddf47c5d-lhvrf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lhvrf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-lhvrf,UID:f5047c17-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463919,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc001815717 0xc001815718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001815b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001815ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.249: INFO: Pod "nginx-deployment-85ddf47c5d-ls66k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ls66k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-ls66k,UID:f5043ddf-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463921,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc001815d77 0xc001815d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001815de0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001815fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.249: INFO: Pod "nginx-deployment-85ddf47c5d-mxgjc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mxgjc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-mxgjc,UID:f4d1c422-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463963,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b6207 0xc0022b6208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b6270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-29 12:44:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.250: INFO: Pod "nginx-deployment-85ddf47c5d-ntcqm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntcqm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-ntcqm,UID:f4de3214-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463915,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b6347 0xc0022b6348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b6450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.250: INFO: Pod "nginx-deployment-85ddf47c5d-p65w2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p65w2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-p65w2,UID:d64d9da6-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463810,Generation:0,CreationTimestamp:2019-12-29 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b64e7 0xc0022b64e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b6550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-29 12:43:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fccb62193c9b4daa775e43b27d553a2d31c6ea36aa5df2b36eadc8d8c7f83e89}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.251: INFO: Pod "nginx-deployment-85ddf47c5d-q4hxj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q4hxj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-q4hxj,UID:d64bc908-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463775,Generation:0,CreationTimestamp:2019-12-29 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b6777 0xc0022b6778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b67e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-29 12:43:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e9df56818f670b61b2d4368e24a0a507042ba6e167cb26f48e70ae0cadcc5602}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.251: INFO: Pod "nginx-deployment-85ddf47c5d-qh895" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qh895,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-qh895,UID:f4df4829-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463911,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b6977 0xc0022b6978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b6a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.251: INFO: Pod "nginx-deployment-85ddf47c5d-qwrxq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qwrxq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-qwrxq,UID:f4de60ea-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463910,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b6b27 0xc0022b6b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b6b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.252: INFO: Pod "nginx-deployment-85ddf47c5d-sg6lv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sg6lv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-sg6lv,UID:d62fb017-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463763,Generation:0,CreationTimestamp:2019-12-29 12:43:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b6c27 0xc0022b6c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b6e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b6e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-29 12:43:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e4edd7f18878af4c54850dcdcc2a17af06d147dd0bdbfa20504ed5c0b76c893c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.252: INFO: Pod "nginx-deployment-85ddf47c5d-vlx2b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vlx2b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-vlx2b,UID:f504893e-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463922,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b72c7 0xc0022b72c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b73f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b7410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.252: INFO: Pod "nginx-deployment-85ddf47c5d-wd6c5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wd6c5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-wd6c5,UID:f5050e44-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463923,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b7487 0xc0022b7488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b74f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b7510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.252: INFO: Pod "nginx-deployment-85ddf47c5d-x4m77" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x4m77,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-x4m77,UID:d635fbcb-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463806,Generation:0,CreationTimestamp:2019-12-29 12:43:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b7647 0xc0022b7648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b76b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b76d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:43:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-29 12:43:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-29 12:44:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1c465743b0cdb48c3caca94c51fde9dd08102abb936e84f8e7ff08010ebe5822}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 29 12:44:35.253: INFO: Pod "nginx-deployment-85ddf47c5d-zz7qt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zz7qt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mkskq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mkskq/pods/nginx-deployment-85ddf47c5d-zz7qt,UID:f505105f-2a38-11ea-a994-fa163e34d433,ResourceVersion:16463924,Generation:0,CreationTimestamp:2019-12-29 12:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d620a50e-2a38-11ea-a994-fa163e34d433 0xc0022b7867 0xc0022b7868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8222p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8222p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8222p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022b78d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b78f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:44:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:44:35.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-mkskq" for this suite.
Dec 29 12:45:49.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:45:49.769: INFO: namespace: e2e-tests-deployment-mkskq, resource: bindings, ignored listing per whitelist
Dec 29 12:45:49.875: INFO: namespace e2e-tests-deployment-mkskq deletion completed in 1m13.453317936s

• [SLOW TEST:131.454 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:45:49.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 29 12:45:50.285: INFO: Waiting up to 5m0s for pod "client-containers-24795619-2a39-11ea-9252-0242ac110005" in namespace "e2e-tests-containers-zftdr" to be "success or failure"
Dec 29 12:45:50.306: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.640305ms
Dec 29 12:45:52.327: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040874356s
Dec 29 12:45:54.343: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057664119s
Dec 29 12:45:56.450: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164286479s
Dec 29 12:45:58.468: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182289989s
Dec 29 12:46:00.739: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.453054511s
Dec 29 12:46:02.836: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.550689268s
Dec 29 12:46:05.278: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.99255777s
Dec 29 12:46:07.301: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.015760521s
Dec 29 12:46:09.316: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.030635551s
Dec 29 12:46:11.585: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.299325428s
Dec 29 12:46:13.619: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.333600484s
Dec 29 12:46:16.073: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.787680396s
Dec 29 12:46:18.143: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.857512781s
Dec 29 12:46:20.165: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.878977067s
Dec 29 12:46:22.223: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.937479808s
STEP: Saw pod success
Dec 29 12:46:22.223: INFO: Pod "client-containers-24795619-2a39-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:46:22.234: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-24795619-2a39-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:46:22.364: INFO: Waiting for pod client-containers-24795619-2a39-11ea-9252-0242ac110005 to disappear
Dec 29 12:46:22.378: INFO: Pod client-containers-24795619-2a39-11ea-9252-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:46:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zftdr" for this suite.
Dec 29 12:46:28.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:46:28.600: INFO: namespace: e2e-tests-containers-zftdr, resource: bindings, ignored listing per whitelist
Dec 29 12:46:28.870: INFO: namespace e2e-tests-containers-zftdr deletion completed in 6.485492358s

• [SLOW TEST:38.994 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:46:28.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:46:29.284: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-ldllm" to be "success or failure"
Dec 29 12:46:29.353: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.538797ms
Dec 29 12:46:31.532: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247892582s
Dec 29 12:46:33.567: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282932089s
Dec 29 12:46:35.596: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311679296s
Dec 29 12:46:38.033: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748897939s
Dec 29 12:46:40.059: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.774395992s
Dec 29 12:46:42.081: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.797086537s
Dec 29 12:46:44.106: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.82133175s
STEP: Saw pod success
Dec 29 12:46:44.106: INFO: Pod "downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:46:44.117: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:46:44.402: INFO: Waiting for pod downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005 to disappear
Dec 29 12:46:44.424: INFO: Pod downwardapi-volume-3bbf3322-2a39-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:46:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ldllm" for this suite.
Dec 29 12:46:50.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:46:50.666: INFO: namespace: e2e-tests-downward-api-ldllm, resource: bindings, ignored listing per whitelist
Dec 29 12:46:50.868: INFO: namespace e2e-tests-downward-api-ldllm deletion completed in 6.432146552s

• [SLOW TEST:21.997 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:46:50.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 29 12:47:01.221: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-48c691f3-2a39-11ea-9252-0242ac110005,GenerateName:,Namespace:e2e-tests-events-z4bqk,SelfLink:/api/v1/namespaces/e2e-tests-events-z4bqk/pods/send-events-48c691f3-2a39-11ea-9252-0242ac110005,UID:48c8475b-2a39-11ea-a994-fa163e34d433,ResourceVersion:16464358,Generation:0,CreationTimestamp:2019-12-29 12:46:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 69702323,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-c6bl2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c6bl2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-c6bl2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017f49d0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0017f49f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:46:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:47:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:47:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 12:46:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-29 12:46:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-29 12:46:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://e7836cc8935699bd9d04f729ae39a29eff9d8d9928a360a0b3a2bcca5cd1e4fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 29 12:47:03.241: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 29 12:47:05.273: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:47:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-z4bqk" for this suite.
Dec 29 12:47:43.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:47:43.684: INFO: namespace: e2e-tests-events-z4bqk, resource: bindings, ignored listing per whitelist
Dec 29 12:47:43.791: INFO: namespace e2e-tests-events-z4bqk deletion completed in 38.482370773s

• [SLOW TEST:52.923 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:47:43.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fg8x7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fg8x7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 29 12:47:58.258: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.266: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.280: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.296: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.306: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.313: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.321: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.330: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.337: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.346: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.353: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.363: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.378: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.404: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.416: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.421: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.425: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.429: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.434: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.437: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005: the server could not find the requested resource (get pods dns-test-685a9aca-2a39-11ea-9252-0242ac110005)
Dec 29 12:47:58.437: INFO: Lookups using e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fg8x7.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 29 12:48:03.797: INFO: DNS probes using e2e-tests-dns-fg8x7/dns-test-685a9aca-2a39-11ea-9252-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:48:03.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-fg8x7" for this suite.
Dec 29 12:48:12.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:48:12.393: INFO: namespace: e2e-tests-dns-fg8x7, resource: bindings, ignored listing per whitelist
Dec 29 12:48:12.400: INFO: namespace e2e-tests-dns-fg8x7 deletion completed in 8.461320982s

• [SLOW TEST:28.609 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:48:12.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-b55s4
Dec 29 12:48:24.804: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-b55s4
STEP: checking the pod's current state and verifying that restartCount is present
Dec 29 12:48:24.815: INFO: Initial restart count of pod liveness-http is 0
Dec 29 12:48:39.087: INFO: Restart count of pod e2e-tests-container-probe-b55s4/liveness-http is now 1 (14.271471002s elapsed)
Dec 29 12:48:59.283: INFO: Restart count of pod e2e-tests-container-probe-b55s4/liveness-http is now 2 (34.467922246s elapsed)
Dec 29 12:49:18.881: INFO: Restart count of pod e2e-tests-container-probe-b55s4/liveness-http is now 3 (54.066075271s elapsed)
Dec 29 12:49:39.310: INFO: Restart count of pod e2e-tests-container-probe-b55s4/liveness-http is now 4 (1m14.494690463s elapsed)
Dec 29 12:50:48.576: INFO: Restart count of pod e2e-tests-container-probe-b55s4/liveness-http is now 5 (2m23.760462464s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:50:49.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-b55s4" for this suite.
Dec 29 12:50:56.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:50:56.674: INFO: namespace: e2e-tests-container-probe-b55s4, resource: bindings, ignored listing per whitelist
Dec 29 12:50:56.691: INFO: namespace e2e-tests-container-probe-b55s4 deletion completed in 6.349372119s

• [SLOW TEST:164.291 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:50:56.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 29 12:50:56.888: INFO: Waiting up to 5m0s for pod "pod-db44e472-2a39-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-pmtkr" to be "success or failure"
Dec 29 12:50:56.897: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.661279ms
Dec 29 12:50:59.158: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269665293s
Dec 29 12:51:01.197: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308847642s
Dec 29 12:51:03.341: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452744272s
Dec 29 12:51:05.385: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.496484697s
Dec 29 12:51:07.433: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.544916361s
Dec 29 12:51:09.733: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.844537125s
STEP: Saw pod success
Dec 29 12:51:09.733: INFO: Pod "pod-db44e472-2a39-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:51:09.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db44e472-2a39-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:51:10.152: INFO: Waiting for pod pod-db44e472-2a39-11ea-9252-0242ac110005 to disappear
Dec 29 12:51:10.236: INFO: Pod pod-db44e472-2a39-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:51:10.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pmtkr" for this suite.
Dec 29 12:51:18.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:51:18.388: INFO: namespace: e2e-tests-emptydir-pmtkr, resource: bindings, ignored listing per whitelist
Dec 29 12:51:18.481: INFO: namespace e2e-tests-emptydir-pmtkr deletion completed in 8.234112887s

• [SLOW TEST:21.790 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:51:18.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 29 12:51:39.019: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:39.092: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:41.092: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:41.636: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:43.092: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:43.102: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:45.093: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:45.109: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:47.093: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:47.114: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:49.093: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:49.106: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:51.092: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:51.106: INFO: Pod pod-with-poststart-http-hook still exists
Dec 29 12:51:53.092: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 29 12:51:53.110: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:51:53.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qggxd" for this suite.
Dec 29 12:52:17.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:52:17.282: INFO: namespace: e2e-tests-container-lifecycle-hook-qggxd, resource: bindings, ignored listing per whitelist
Dec 29 12:52:17.350: INFO: namespace e2e-tests-container-lifecycle-hook-qggxd deletion completed in 24.230374126s

• [SLOW TEST:58.868 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:52:17.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:52:17.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-vmfbs" to be "success or failure"
Dec 29 12:52:17.745: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.483035ms
Dec 29 12:52:19.762: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024660161s
Dec 29 12:52:21.801: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062987087s
Dec 29 12:52:23.816: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078495652s
Dec 29 12:52:25.863: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124806318s
Dec 29 12:52:27.879: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.141700518s
Dec 29 12:52:29.905: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.167436546s
STEP: Saw pod success
Dec 29 12:52:29.905: INFO: Pod "downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:52:29.911: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:52:30.109: INFO: Waiting for pod downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005 to disappear
Dec 29 12:52:30.115: INFO: Pod downwardapi-volume-0b72db5f-2a3a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:52:30.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vmfbs" for this suite.
Dec 29 12:52:36.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:52:36.231: INFO: namespace: e2e-tests-projected-vmfbs, resource: bindings, ignored listing per whitelist
Dec 29 12:52:36.381: INFO: namespace e2e-tests-projected-vmfbs deletion completed in 6.254899211s

• [SLOW TEST:19.031 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:52:36.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 29 12:52:36.745: INFO: Waiting up to 5m0s for pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-ptm89" to be "success or failure"
Dec 29 12:52:36.762: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.051737ms
Dec 29 12:52:38.776: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030934045s
Dec 29 12:52:40.799: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05423609s
Dec 29 12:52:42.973: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228468994s
Dec 29 12:52:44.996: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251387723s
Dec 29 12:52:47.091: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.346127866s
STEP: Saw pod success
Dec 29 12:52:47.091: INFO: Pod "pod-16cbd40f-2a3a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:52:47.149: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-16cbd40f-2a3a-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 12:52:47.510: INFO: Waiting for pod pod-16cbd40f-2a3a-11ea-9252-0242ac110005 to disappear
Dec 29 12:52:47.533: INFO: Pod pod-16cbd40f-2a3a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:52:47.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ptm89" for this suite.
Dec 29 12:52:53.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:52:54.021: INFO: namespace: e2e-tests-emptydir-ptm89, resource: bindings, ignored listing per whitelist
Dec 29 12:52:54.065: INFO: namespace e2e-tests-emptydir-ptm89 deletion completed in 6.51792008s

• [SLOW TEST:17.684 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:52:54.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vpfgw
Dec 29 12:53:04.615: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vpfgw
STEP: checking the pod's current state and verifying that restartCount is present
Dec 29 12:53:04.619: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:57:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vpfgw" for this suite.
Dec 29 12:57:13.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:57:14.045: INFO: namespace: e2e-tests-container-probe-vpfgw, resource: bindings, ignored listing per whitelist
Dec 29 12:57:14.266: INFO: namespace e2e-tests-container-probe-vpfgw deletion completed in 8.530520599s

• [SLOW TEST:260.200 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:57:14.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-bc672f73-2a3a-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 12:57:14.673: INFO: Waiting up to 5m0s for pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-v7nvw" to be "success or failure"
Dec 29 12:57:14.691: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.240442ms
Dec 29 12:57:16.709: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035786348s
Dec 29 12:57:18.728: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055075017s
Dec 29 12:57:20.756: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083619015s
Dec 29 12:57:23.559: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.886466751s
Dec 29 12:57:25.581: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.908448588s
Dec 29 12:57:27.615: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.942437992s
STEP: Saw pod success
Dec 29 12:57:27.615: INFO: Pod "pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:57:27.627: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 12:57:27.858: INFO: Waiting for pod pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005 to disappear
Dec 29 12:57:27.867: INFO: Pod pod-secrets-bc6b3297-2a3a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:57:27.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-v7nvw" for this suite.
Dec 29 12:57:33.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:57:34.092: INFO: namespace: e2e-tests-secrets-v7nvw, resource: bindings, ignored listing per whitelist
Dec 29 12:57:34.095: INFO: namespace e2e-tests-secrets-v7nvw deletion completed in 6.220355939s

• [SLOW TEST:19.829 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:57:34.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 12:57:34.282: INFO: Creating ReplicaSet my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005
Dec 29 12:57:34.304: INFO: Pod name my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005: Found 0 pods out of 1
Dec 29 12:57:40.721: INFO: Pod name my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005: Found 1 pods out of 1
Dec 29 12:57:40.721: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005" is running
Dec 29 12:57:45.302: INFO: Pod "my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005-kf568" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 12:57:35 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 12:57:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 12:57:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-29 12:57:34 +0000 UTC Reason: Message:}])
Dec 29 12:57:45.302: INFO: Trying to dial the pod
Dec 29 12:57:50.353: INFO: Controller my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005: Got expected result from replica 1 [my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005-kf568]: "my-hostname-basic-c8291b40-2a3a-11ea-9252-0242ac110005-kf568", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:57:50.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-xbswf" for this suite.
Dec 29 12:57:56.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:57:56.556: INFO: namespace: e2e-tests-replicaset-xbswf, resource: bindings, ignored listing per whitelist
Dec 29 12:57:56.682: INFO: namespace e2e-tests-replicaset-xbswf deletion completed in 6.319316409s

• [SLOW TEST:22.585 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
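The spec above creates a single-replica ReplicaSet running a public hostname-serving image and dials each pod to confirm it serves its own name. A minimal sketch of such a ReplicaSet (the name and image tag here are illustrative, not copied from the test's generated manifest):

```yaml
# One replica of a public serve-hostname-style image; the test then
# dials the pod and expects its own pod name back as the response.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # illustrative tag
```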
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:57:56.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1229 12:58:27.615817       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 29 12:58:27.615: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:58:27.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cljd5" for this suite.
Dec 29 12:58:37.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:58:37.999: INFO: namespace: e2e-tests-gc-cljd5, resource: bindings, ignored listing per whitelist
Dec 29 12:58:38.049: INFO: namespace e2e-tests-gc-cljd5 deletion completed in 10.428694154s

• [SLOW TEST:41.366 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
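The orphaning behavior this spec verifies is driven by the delete options sent with the Deployment delete request; a minimal sketch of that request body (standard `DeleteOptions` fields from the Kubernetes API):

```yaml
# Body for DELETE .../apis/apps/v1/namespaces/<ns>/deployments/<name>.
# propagationPolicy: Orphan tells the garbage collector to leave
# dependents (here, the ReplicaSet the Deployment created) in place,
# which is why the spec waits 30s and checks the RS still exists.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

The kubectl equivalent is `kubectl delete deployment <name> --cascade=orphan` (older kubectl releases spelled this `--cascade=false`).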
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:58:38.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 12:58:38.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-vwkfv" to be "success or failure"
Dec 29 12:58:38.628: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063666ms
Dec 29 12:58:41.008: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391719774s
Dec 29 12:58:43.028: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41125207s
Dec 29 12:58:45.046: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429605657s
Dec 29 12:58:48.861: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.245206752s
Dec 29 12:58:51.008: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.391344022s
Dec 29 12:58:53.020: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.404015499s
Dec 29 12:58:55.034: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.417920692s
STEP: Saw pod success
Dec 29 12:58:55.034: INFO: Pod "downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 12:58:55.040: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 12:58:55.153: INFO: Waiting for pod downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005 to disappear
Dec 29 12:58:55.186: INFO: Pod downwardapi-volume-ee5f7028-2a3a-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:58:55.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vwkfv" for this suite.
Dec 29 12:59:01.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:59:01.327: INFO: namespace: e2e-tests-projected-vwkfv, resource: bindings, ignored listing per whitelist
Dec 29 12:59:01.412: INFO: namespace e2e-tests-projected-vwkfv deletion completed in 6.217690184s

• [SLOW TEST:23.362 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
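The projected downward API volume this spec exercises exposes `limits.cpu` through a file; when the container sets no CPU limit, the value falls back to the node's allocatable CPU. A minimal sketch (pod and path names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the downward API reports
    # the node allocatable CPU as the default for limits.cpu.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```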
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:59:01.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-fgwf
STEP: Creating a pod to test atomic-volume-subpath
Dec 29 12:59:01.597: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fgwf" in namespace "e2e-tests-subpath-dr2rz" to be "success or failure"
Dec 29 12:59:01.605: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.62871ms
Dec 29 12:59:03.643: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045168429s
Dec 29 12:59:05.762: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164582058s
Dec 29 12:59:08.317: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.718815959s
Dec 29 12:59:10.332: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73468842s
Dec 29 12:59:12.977: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.379029641s
Dec 29 12:59:14.998: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.400304375s
Dec 29 12:59:17.105: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.507343866s
Dec 29 12:59:19.119: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.520896601s
Dec 29 12:59:21.204: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.606617735s
Dec 29 12:59:23.231: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 21.633453226s
Dec 29 12:59:25.253: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 23.654912329s
Dec 29 12:59:27.274: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 25.676092438s
Dec 29 12:59:29.293: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 27.695557194s
Dec 29 12:59:31.312: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 29.714547384s
Dec 29 12:59:33.326: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 31.728732363s
Dec 29 12:59:35.342: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 33.744682908s
Dec 29 12:59:37.357: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 35.759577045s
Dec 29 12:59:39.403: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Running", Reason="", readiness=false. Elapsed: 37.804871521s
Dec 29 12:59:42.066: INFO: Pod "pod-subpath-test-projected-fgwf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.46828653s
STEP: Saw pod success
Dec 29 12:59:42.066: INFO: Pod "pod-subpath-test-projected-fgwf" satisfied condition "success or failure"
Dec 29 12:59:42.129: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-fgwf container test-container-subpath-projected-fgwf: 
STEP: delete the pod
Dec 29 12:59:42.695: INFO: Waiting for pod pod-subpath-test-projected-fgwf to disappear
Dec 29 12:59:42.719: INFO: Pod pod-subpath-test-projected-fgwf no longer exists
STEP: Deleting pod pod-subpath-test-projected-fgwf
Dec 29 12:59:42.719: INFO: Deleting pod "pod-subpath-test-projected-fgwf" in namespace "e2e-tests-subpath-dr2rz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 12:59:42.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dr2rz" for this suite.
Dec 29 12:59:50.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 12:59:50.882: INFO: namespace: e2e-tests-subpath-dr2rz, resource: bindings, ignored listing per whitelist
Dec 29 12:59:50.946: INFO: namespace e2e-tests-subpath-dr2rz deletion completed in 8.203753487s

• [SLOW TEST:49.534 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
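The `pod-subpath-test-projected-*` pod above mounts an atomic-writer (projected) volume via `subPath`. A minimal sketch of the pattern, with illustrative names and a hypothetical ConfigMap source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example
spec:
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume
      subPath: path/in/volume   # mount only this path within the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-configmap    # hypothetical source object
```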
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 12:59:50.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-19c1714f-2a3b-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 12:59:51.197: INFO: Waiting up to 5m0s for pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-6pd8n" to be "success or failure"
Dec 29 12:59:51.218: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.747316ms
Dec 29 12:59:53.236: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038801988s
Dec 29 12:59:55.258: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060262342s
Dec 29 12:59:57.299: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101710392s
Dec 29 13:00:00.408: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21070214s
Dec 29 13:00:02.619: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.421202403s
Dec 29 13:00:04.863: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 13.665808363s
Dec 29 13:00:06.882: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 15.684723494s
Dec 29 13:00:08.910: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.712307276s
STEP: Saw pod success
Dec 29 13:00:08.910: INFO: Pod "pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:00:08.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 13:00:09.249: INFO: Waiting for pod pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005 to disappear
Dec 29 13:00:09.280: INFO: Pod pod-secrets-19c24019-2a3b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:00:09.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6pd8n" for this suite.
Dec 29 13:00:15.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:00:15.537: INFO: namespace: e2e-tests-secrets-6pd8n, resource: bindings, ignored listing per whitelist
Dec 29 13:00:15.561: INFO: namespace e2e-tests-secrets-6pd8n deletion completed in 6.267603723s

• [SLOW TEST:24.614 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
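The secret volume in this spec uses key-to-path mappings with an explicit per-item mode. A minimal sketch of that shape (secret and key names are illustrative, not the generated ones from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1             # map this secret key...
        path: new-path-data-1   # ...to this filename in the volume
        mode: 0400              # explicit per-item file mode (octal)
```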
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:00:15.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 13:00:15.883: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 29 13:00:20.958: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 29 13:00:26.980: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 29 13:00:28.996: INFO: Creating deployment "test-rollover-deployment"
Dec 29 13:00:29.164: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 29 13:00:31.806: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 29 13:00:32.200: INFO: Ensure that both replica sets have 1 created replica
Dec 29 13:00:32.232: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 29 13:00:32.257: INFO: Updating deployment test-rollover-deployment
Dec 29 13:00:32.257: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 29 13:00:34.959: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 29 13:00:34.991: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 29 13:00:35.010: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:35.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:37.051: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:37.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:39.042: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:39.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:41.975: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:41.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:43.114: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:43.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221234, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:45.034: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:45.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221244, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:47.032: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:47.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221244, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:49.030: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:49.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221244, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:51.028: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:51.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221244, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:53.040: INFO: all replica sets need to contain the pod-template-hash label
Dec 29 13:00:53.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221244, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:55.072: INFO: 
Dec 29 13:00:55.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221254, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221229, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:00:57.036: INFO: 
Dec 29 13:00:57.036: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 29 13:00:57.059: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-qlw9x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qlw9x/deployments/test-rollover-deployment,UID:304d7b1a-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465837,Generation:2,CreationTimestamp:2019-12-29 13:00:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-29 13:00:29 +0000 UTC 2019-12-29 13:00:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-29 13:00:55 +0000 UTC 2019-12-29 13:00:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 29 13:00:57.065: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-qlw9x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qlw9x/replicasets/test-rollover-deployment-5b8479fdb6,UID:323f82b1-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465828,Generation:2,CreationTimestamp:2019-12-29 13:00:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 304d7b1a-2a3b-11ea-a994-fa163e34d433 0xc002048ab7 0xc002048ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 29 13:00:57.065: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 29 13:00:57.066: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-qlw9x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qlw9x/replicasets/test-rollover-controller,UID:28681995-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465834,Generation:2,CreationTimestamp:2019-12-29 13:00:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 304d7b1a-2a3b-11ea-a994-fa163e34d433 0xc002048927 0xc002048928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 13:00:57.066: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-qlw9x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qlw9x/replicasets/test-rollover-deployment-58494b7559,UID:3084c21a-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465793,Generation:2,CreationTimestamp:2019-12-29 13:00:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 304d7b1a-2a3b-11ea-a994-fa163e34d433 0xc0020489e7 0xc0020489e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 13:00:57.075: INFO: Pod "test-rollover-deployment-5b8479fdb6-7kv7f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-7kv7f,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-qlw9x,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qlw9x/pods/test-rollover-deployment-5b8479fdb6-7kv7f,UID:32f40535-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465813,Generation:0,CreationTimestamp:2019-12-29 13:00:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 323f82b1-2a3b-11ea-a994-fa163e34d433 0xc0024e2e97 0xc0024e2e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wz6xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wz6xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wz6xg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024e2f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024e2f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:00:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:00:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:00:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:00:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-29 13:00:34 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-29 13:00:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9a52ec2e507772eb079d34d9a86efb7dec273f52618c0710746d8b20ccb14d53}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:00:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-qlw9x" for this suite.
Dec 29 13:01:05.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:01:05.815: INFO: namespace: e2e-tests-deployment-qlw9x, resource: bindings, ignored listing per whitelist
Dec 29 13:01:05.918: INFO: namespace e2e-tests-deployment-qlw9x deletion completed in 8.836803664s

• [SLOW TEST:50.357 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:01:05.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 13:01:06.391: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 29 13:01:06.441: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 29 13:01:11.454: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 29 13:01:19.483: INFO: Creating deployment "test-rolling-update-deployment"
Dec 29 13:01:19.516: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 29 13:01:19.553: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 29 13:01:21.905: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 29 13:01:22.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:01:25.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:01:26.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:01:28.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713221279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 29 13:01:30.981: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 29 13:01:31.382: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-v89sw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v89sw/deployments/test-rolling-update-deployment,UID:4e65349c-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465955,Generation:1,CreationTimestamp:2019-12-29 13:01:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-29 13:01:19 +0000 UTC 2019-12-29 13:01:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-29 13:01:29 +0000 UTC 2019-12-29 13:01:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 29 13:01:31.393: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-v89sw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v89sw/replicasets/test-rolling-update-deployment-75db98fb4c,UID:4e719c4a-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465945,Generation:1,CreationTimestamp:2019-12-29 13:01:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e65349c-2a3b-11ea-a994-fa163e34d433 0xc00197d367 0xc00197d368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 29 13:01:31.393: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 29 13:01:31.394: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-v89sw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v89sw/replicasets/test-rolling-update-controller,UID:4697e488-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465954,Generation:2,CreationTimestamp:2019-12-29 13:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e65349c-2a3b-11ea-a994-fa163e34d433 0xc00197d28f 0xc00197d2a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 29 13:01:31.405: INFO: Pod "test-rolling-update-deployment-75db98fb4c-qmcff" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-qmcff,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-v89sw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v89sw/pods/test-rolling-update-deployment-75db98fb4c-qmcff,UID:4e72e146-2a3b-11ea-a994-fa163e34d433,ResourceVersion:16465944,Generation:0,CreationTimestamp:2019-12-29 13:01:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 4e719c4a-2a3b-11ea-a994-fa163e34d433 0xc00089ee37 0xc00089ee38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lfv5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lfv5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-lfv5z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00089ef00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00089ef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:01:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:01:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:01:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-29 13:01:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-29 13:01:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-29 13:01:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9896f4721d5d16dd5f802ce264e00be7b97effe13843e8e5f0ee0f5b487b129b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:01:31.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-v89sw" for this suite.
Dec 29 13:01:42.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:01:42.661: INFO: namespace: e2e-tests-deployment-v89sw, resource: bindings, ignored listing per whitelist
Dec 29 13:01:42.710: INFO: namespace e2e-tests-deployment-v89sw deletion completed in 11.297148511s

• [SLOW TEST:36.792 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:01:42.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-dkq2
STEP: Creating a pod to test atomic-volume-subpath
Dec 29 13:01:43.297: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dkq2" in namespace "e2e-tests-subpath-zkft8" to be "success or failure"
Dec 29 13:01:43.420: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 122.868014ms
Dec 29 13:01:46.080: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782813763s
Dec 29 13:01:48.112: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.814407115s
Dec 29 13:01:50.691: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.394064313s
Dec 29 13:01:52.786: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.488395429s
Dec 29 13:01:54.796: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.498362816s
Dec 29 13:01:56.833: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.536189764s
Dec 29 13:01:59.080: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.783191162s
Dec 29 13:02:01.094: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.79662979s
Dec 29 13:02:03.168: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.870380531s
Dec 29 13:02:05.188: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 21.891158554s
Dec 29 13:02:07.212: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 23.91465453s
Dec 29 13:02:09.227: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 25.929453912s
Dec 29 13:02:11.259: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 27.962025843s
Dec 29 13:02:13.275: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 29.978175141s
Dec 29 13:02:15.309: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 32.011393557s
Dec 29 13:02:17.324: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 34.02680569s
Dec 29 13:02:19.340: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Running", Reason="", readiness=false. Elapsed: 36.042489664s
Dec 29 13:02:21.390: INFO: Pod "pod-subpath-test-configmap-dkq2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.092744672s
STEP: Saw pod success
Dec 29 13:02:21.390: INFO: Pod "pod-subpath-test-configmap-dkq2" satisfied condition "success or failure"
Dec 29 13:02:21.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-dkq2 container test-container-subpath-configmap-dkq2: 
STEP: delete the pod
Dec 29 13:02:22.231: INFO: Waiting for pod pod-subpath-test-configmap-dkq2 to disappear
Dec 29 13:02:22.567: INFO: Pod pod-subpath-test-configmap-dkq2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dkq2
Dec 29 13:02:22.567: INFO: Deleting pod "pod-subpath-test-configmap-dkq2" in namespace "e2e-tests-subpath-zkft8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:02:22.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zkft8" for this suite.
Dec 29 13:02:30.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:02:31.010: INFO: namespace: e2e-tests-subpath-zkft8, resource: bindings, ignored listing per whitelist
Dec 29 13:02:31.205: INFO: namespace e2e-tests-subpath-zkft8 deletion completed in 8.564567096s

• [SLOW TEST:48.495 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:02:31.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 29 13:02:57.629: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:02:57.751: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:02:59.751: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:02:59.774: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:01.751: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:01.768: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:03.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:03.786: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:05.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:05.768: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:07.751: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:07.773: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:09.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:09.771: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:11.751: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:11.765: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:13.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:13.868: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:15.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:16.162: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:17.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:17.767: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:19.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:19.791: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 29 13:03:21.752: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 29 13:03:21.771: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:03:21.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f6jws" for this suite.
Dec 29 13:03:45.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:03:45.933: INFO: namespace: e2e-tests-container-lifecycle-hook-f6jws, resource: bindings, ignored listing per whitelist
Dec 29 13:03:46.115: INFO: namespace e2e-tests-container-lifecycle-hook-f6jws deletion completed in 24.274821405s

• [SLOW TEST:74.909 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:03:46.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 29 13:03:46.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-v6f5h'
Dec 29 13:03:48.667: INFO: stderr: ""
Dec 29 13:03:48.667: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 29 13:04:03.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-v6f5h -o json'
Dec 29 13:04:03.847: INFO: stderr: ""
Dec 29 13:04:03.847: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-29T13:03:48Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-v6f5h\",\n        \"resourceVersion\": \"16466247\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-v6f5h/pods/e2e-test-nginx-pod\",\n        \"uid\": \"a734ac31-2a3b-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-46bb8\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-46bb8\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-46bb8\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-29T13:03:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-29T13:04:01Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-29T13:04:01Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-29T13:03:48Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://34c4f262c820818d9012c729120696ce536fa6bf856bb9cb1ece51d3a403ddc0\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-29T13:04:00Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-29T13:03:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 29 13:04:03.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-v6f5h'
Dec 29 13:04:04.219: INFO: stderr: ""
Dec 29 13:04:04.219: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 29 13:04:04.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-v6f5h'
Dec 29 13:04:22.618: INFO: stderr: ""
Dec 29 13:04:22.619: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:04:22.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v6f5h" for this suite.
Dec 29 13:04:28.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:04:28.734: INFO: namespace: e2e-tests-kubectl-v6f5h, resource: bindings, ignored listing per whitelist
Dec 29 13:04:28.778: INFO: namespace e2e-tests-kubectl-v6f5h deletion completed in 6.14195354s

• [SLOW TEST:42.663 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:04:28.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-bf5cab75-2a3b-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:04:43.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-44qzs" for this suite.
Dec 29 13:05:07.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:05:07.370: INFO: namespace: e2e-tests-configmap-44qzs, resource: bindings, ignored listing per whitelist
Dec 29 13:05:07.540: INFO: namespace e2e-tests-configmap-44qzs deletion completed in 24.389866059s

• [SLOW TEST:38.762 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:05:07.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 29 13:05:07.865: INFO: Waiting up to 5m0s for pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005" in namespace "e2e-tests-emptydir-5rr8k" to be "success or failure"
Dec 29 13:05:07.888: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.739465ms
Dec 29 13:05:10.205: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339962133s
Dec 29 13:05:12.234: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369048382s
Dec 29 13:05:14.247: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382106052s
Dec 29 13:05:16.350: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.484218838s
Dec 29 13:05:18.371: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.505859024s
STEP: Saw pod success
Dec 29 13:05:18.371: INFO: Pod "pod-d67ea6c0-2a3b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:05:18.380: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d67ea6c0-2a3b-11ea-9252-0242ac110005 container test-container: 
STEP: delete the pod
Dec 29 13:05:18.607: INFO: Waiting for pod pod-d67ea6c0-2a3b-11ea-9252-0242ac110005 to disappear
Dec 29 13:05:20.242: INFO: Pod pod-d67ea6c0-2a3b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:05:20.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5rr8k" for this suite.
Dec 29 13:05:26.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:05:27.030: INFO: namespace: e2e-tests-emptydir-5rr8k, resource: bindings, ignored listing per whitelist
Dec 29 13:05:27.089: INFO: namespace e2e-tests-emptydir-5rr8k deletion completed in 6.831147729s

• [SLOW TEST:19.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:05:27.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e215a738-2a3b-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 13:05:27.425: INFO: Waiting up to 5m0s for pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-pc7z8" to be "success or failure"
Dec 29 13:05:27.440: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.375664ms
Dec 29 13:05:29.466: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039742232s
Dec 29 13:05:31.481: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055478374s
Dec 29 13:05:33.911: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.485198223s
Dec 29 13:05:35.934: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507557245s
Dec 29 13:05:37.947: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.520557888s
STEP: Saw pod success
Dec 29 13:05:37.947: INFO: Pod "pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:05:37.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 13:05:38.267: INFO: Waiting for pod pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005 to disappear
Dec 29 13:05:38.285: INFO: Pod pod-secrets-e216c171-2a3b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:05:38.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pc7z8" for this suite.
Dec 29 13:05:46.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:05:46.847: INFO: namespace: e2e-tests-secrets-pc7z8, resource: bindings, ignored listing per whitelist
Dec 29 13:05:46.873: INFO: namespace e2e-tests-secrets-pc7z8 deletion completed in 8.581074219s

• [SLOW TEST:19.784 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:05:46.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-edf6088d-2a3b-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 13:05:47.250: INFO: Waiting up to 5m0s for pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005" in namespace "e2e-tests-configmap-2jksz" to be "success or failure"
Dec 29 13:05:47.401: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 151.36235ms
Dec 29 13:05:49.417: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167565683s
Dec 29 13:05:51.469: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218718411s
Dec 29 13:05:54.300: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.049764285s
Dec 29 13:05:56.316: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.066005704s
Dec 29 13:05:59.143: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.89327659s
STEP: Saw pod success
Dec 29 13:05:59.144: INFO: Pod "pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:05:59.165: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 29 13:05:59.642: INFO: Waiting for pod pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005 to disappear
Dec 29 13:05:59.682: INFO: Pod pod-configmaps-edf7ba0b-2a3b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:05:59.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2jksz" for this suite.
Dec 29 13:06:05.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:06:05.944: INFO: namespace: e2e-tests-configmap-2jksz, resource: bindings, ignored listing per whitelist
Dec 29 13:06:05.976: INFO: namespace e2e-tests-configmap-2jksz deletion completed in 6.276730512s

• [SLOW TEST:19.099 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:06:05.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f9473bf4-2a3b-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 13:06:06.213: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-xtkwh" to be "success or failure"
Dec 29 13:06:06.246: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.617279ms
Dec 29 13:06:08.673: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459763088s
Dec 29 13:06:10.692: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478801535s
Dec 29 13:06:12.714: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501035489s
Dec 29 13:06:14.765: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551743901s
Dec 29 13:06:16.788: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.574965875s
Dec 29 13:06:18.837: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.623666147s
STEP: Saw pod success
Dec 29 13:06:18.837: INFO: Pod "pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:06:19.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 13:06:19.498: INFO: Waiting for pod pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005 to disappear
Dec 29 13:06:19.515: INFO: Pod pod-projected-configmaps-f948308f-2a3b-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:06:19.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xtkwh" for this suite.
Dec 29 13:06:25.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:06:25.655: INFO: namespace: e2e-tests-projected-xtkwh, resource: bindings, ignored listing per whitelist
Dec 29 13:06:25.719: INFO: namespace e2e-tests-projected-xtkwh deletion completed in 6.197358427s

• [SLOW TEST:19.743 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:06:25.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 13:06:26.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 29 13:06:26.235: INFO: stderr: ""
Dec 29 13:06:26.235: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 29 13:06:26.252: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:06:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nplxh" for this suite.
Dec 29 13:06:32.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:06:32.450: INFO: namespace: e2e-tests-kubectl-nplxh, resource: bindings, ignored listing per whitelist
Dec 29 13:06:32.624: INFO: namespace e2e-tests-kubectl-nplxh deletion completed in 6.337412117s

S [SKIPPING] [6.904 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 29 13:06:26.252: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:06:32.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0929db2e-2a3c-11ea-9252-0242ac110005
STEP: Creating secret with name s-test-opt-upd-0929dc48-2a3c-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0929db2e-2a3c-11ea-9252-0242ac110005
STEP: Updating secret s-test-opt-upd-0929dc48-2a3c-11ea-9252-0242ac110005
STEP: Creating secret with name s-test-opt-create-0929dc8c-2a3c-11ea-9252-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:08:14.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hn75b" for this suite.
Dec 29 13:08:40.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:08:40.923: INFO: namespace: e2e-tests-secrets-hn75b, resource: bindings, ignored listing per whitelist
Dec 29 13:08:41.067: INFO: namespace e2e-tests-secrets-hn75b deletion completed in 26.37161147s

• [SLOW TEST:128.443 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:08:41.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:08:55.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-52r4w" for this suite.
Dec 29 13:09:37.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:09:37.923: INFO: namespace: e2e-tests-kubelet-test-52r4w, resource: bindings, ignored listing per whitelist
Dec 29 13:09:38.030: INFO: namespace e2e-tests-kubelet-test-52r4w deletion completed in 42.232359898s

• [SLOW TEST:56.961 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:09:38.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 29 13:09:38.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005" in namespace "e2e-tests-downward-api-9xktd" to be "success or failure"
Dec 29 13:09:38.292: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.280131ms
Dec 29 13:09:40.855: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.588415087s
Dec 29 13:09:42.890: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.623619903s
Dec 29 13:09:45.315: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.048710662s
Dec 29 13:09:47.428: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.161832362s
Dec 29 13:09:49.502: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.235886424s
STEP: Saw pod success
Dec 29 13:09:49.503: INFO: Pod "downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:09:49.534: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005 container client-container: 
STEP: delete the pod
Dec 29 13:09:50.535: INFO: Waiting for pod downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005 to disappear
Dec 29 13:09:50.687: INFO: Pod downwardapi-volume-77ad1309-2a3c-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:09:50.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9xktd" for this suite.
Dec 29 13:09:56.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:09:57.036: INFO: namespace: e2e-tests-downward-api-9xktd, resource: bindings, ignored listing per whitelist
Dec 29 13:09:57.039: INFO: namespace e2e-tests-downward-api-9xktd deletion completed in 6.338566548s

• [SLOW TEST:19.008 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:09:57.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 29 13:09:57.227: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-pd8kq" to be "success or failure"
Dec 29 13:09:57.250: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.22611ms
Dec 29 13:09:59.465: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237327735s
Dec 29 13:10:01.498: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270162213s
Dec 29 13:10:03.513: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285919435s
Dec 29 13:10:06.601: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.373464354s
Dec 29 13:10:08.616: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.388654244s
Dec 29 13:10:10.657: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.429631489s
STEP: Saw pod success
Dec 29 13:10:10.658: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 29 13:10:10.685: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 29 13:10:11.255: INFO: Waiting for pod pod-host-path-test to disappear
Dec 29 13:10:11.277: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:10:11.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-pd8kq" for this suite.
Dec 29 13:10:17.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:10:17.733: INFO: namespace: e2e-tests-hostpath-pd8kq, resource: bindings, ignored listing per whitelist
Dec 29 13:10:17.759: INFO: namespace e2e-tests-hostpath-pd8kq deletion completed in 6.453938171s

• [SLOW TEST:20.720 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:10:17.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-8f5eb6a1-2a3c-11ea-9252-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-8f5eb719-2a3c-11ea-9252-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8f5eb6a1-2a3c-11ea-9252-0242ac110005
STEP: Updating configmap cm-test-opt-upd-8f5eb719-2a3c-11ea-9252-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-8f5eb747-2a3c-11ea-9252-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:10:36.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gzgvt" for this suite.
Dec 29 13:11:00.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:11:00.848: INFO: namespace: e2e-tests-configmap-gzgvt, resource: bindings, ignored listing per whitelist
Dec 29 13:11:00.873: INFO: namespace e2e-tests-configmap-gzgvt deletion completed in 24.188929827s

• [SLOW TEST:43.113 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:11:00.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6qjpd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 29 13:11:01.051: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 29 13:11:39.272: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-6qjpd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 29 13:11:39.272: INFO: >>> kubeConfig: /root/.kube/config
Dec 29 13:11:39.730: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:11:39.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-6qjpd" for this suite.
Dec 29 13:12:03.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:12:04.012: INFO: namespace: e2e-tests-pod-network-test-6qjpd, resource: bindings, ignored listing per whitelist
Dec 29 13:12:04.053: INFO: namespace e2e-tests-pod-network-test-6qjpd deletion completed in 24.303704273s

• [SLOW TEST:63.181 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:12:04.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 29 13:12:04.534: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467187,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 29 13:12:04.534: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467187,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 29 13:12:14.594: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467199,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 29 13:12:14.595: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467199,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 29 13:12:24.626: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467212,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 29 13:12:24.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467212,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 29 13:12:34.681: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467225,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 29 13:12:34.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-a,UID:cedc7f55-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467225,Generation:0,CreationTimestamp:2019-12-29 13:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 29 13:12:44.708: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-b,UID:e6ce01d7-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467238,Generation:0,CreationTimestamp:2019-12-29 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 29 13:12:44.709: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-b,UID:e6ce01d7-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467238,Generation:0,CreationTimestamp:2019-12-29 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 29 13:12:54.728: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-b,UID:e6ce01d7-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467251,Generation:0,CreationTimestamp:2019-12-29 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 29 13:12:54.728: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-754rf,SelfLink:/api/v1/namespaces/e2e-tests-watch-754rf/configmaps/e2e-watch-test-configmap-b,UID:e6ce01d7-2a3c-11ea-a994-fa163e34d433,ResourceVersion:16467251,Generation:0,CreationTimestamp:2019-12-29 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:13:04.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-754rf" for this suite.
Dec 29 13:13:10.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:13:10.938: INFO: namespace: e2e-tests-watch-754rf, resource: bindings, ignored listing per whitelist
Dec 29 13:13:11.070: INFO: namespace e2e-tests-watch-754rf deletion completed in 6.277339896s

• [SLOW TEST:67.017 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
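The watch test above drives ADDED/MODIFIED/DELETED notifications by mutating labeled ConfigMaps; a minimal sketch of the object it observes (name and label are taken from the log lines, everything else left at defaults):

```yaml
# ConfigMap observed by the watch test; the label
# watch-this-configmap=multiple-watchers-B is what the watchers select on,
# which is why two watchers each log the same DELETED event above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  namespace: e2e-tests-watch-754rf
  labels:
    watch-this-configmap: multiple-watchers-B
data: {}
```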
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:13:11.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 13:13:11.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 29 13:13:11.400: INFO: stderr: ""
Dec 29 13:13:11.401: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:13:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-szf7v" for this suite.
Dec 29 13:13:17.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:13:17.624: INFO: namespace: e2e-tests-kubectl-szf7v, resource: bindings, ignored listing per whitelist
Dec 29 13:13:17.634: INFO: namespace e2e-tests-kubectl-szf7v deletion completed in 6.220973319s

• [SLOW TEST:6.563 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:13:17.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 29 13:13:18.244: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fa985bc6-2a3c-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002383ff2), BlockOwnerDeletion:(*bool)(0xc002383ff3)}}
Dec 29 13:13:18.446: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"fa83d25c-2a3c-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0022ca4d2), BlockOwnerDeletion:(*bool)(0xc0022ca4d3)}}
Dec 29 13:13:18.467: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"fa86314f-2a3c-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001bca14a), BlockOwnerDeletion:(*bool)(0xc001bca14b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:13:23.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jxxfx" for this suite.
Dec 29 13:13:31.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:13:31.808: INFO: namespace: e2e-tests-gc-jxxfx, resource: bindings, ignored listing per whitelist
Dec 29 13:13:31.971: INFO: namespace e2e-tests-gc-jxxfx deletion completed in 8.401083657s

• [SLOW TEST:14.337 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
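The garbage-collector test above wires pod1 → pod3 → pod2 → pod1 through ownerReferences; one link of that cycle might look like the sketch below (the UID is copied from the log, the container image is illustrative, and in practice the UID must match the live owner object):

```yaml
# pod1 declares pod3 as its controlling owner; pod2 and pod3 carry the
# analogous references, closing the dependency circle that the garbage
# collector must still be able to collect.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: fa985bc6-2a3c-11ea-a994-fa163e34d433
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: main
    image: busybox   # illustrative image, not from the log
```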
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:13:31.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 29 13:13:32.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cgdpq'
Dec 29 13:13:32.654: INFO: stderr: ""
Dec 29 13:13:32.654: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 29 13:13:33.836: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:33.837: INFO: Found 0 / 1
Dec 29 13:13:34.759: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:34.760: INFO: Found 0 / 1
Dec 29 13:13:35.689: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:35.690: INFO: Found 0 / 1
Dec 29 13:13:36.672: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:36.672: INFO: Found 0 / 1
Dec 29 13:13:38.473: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:38.473: INFO: Found 0 / 1
Dec 29 13:13:38.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:38.691: INFO: Found 0 / 1
Dec 29 13:13:39.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:39.690: INFO: Found 0 / 1
Dec 29 13:13:40.697: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:40.697: INFO: Found 0 / 1
Dec 29 13:13:41.701: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:41.702: INFO: Found 1 / 1
Dec 29 13:13:41.702: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 29 13:13:41.713: INFO: Selector matched 1 pods for map[app:redis]
Dec 29 13:13:41.713: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 29 13:13:41.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4wncn redis-master --namespace=e2e-tests-kubectl-cgdpq'
Dec 29 13:13:41.966: INFO: stderr: ""
Dec 29 13:13:41.966: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Dec 13:13:40.215 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Dec 13:13:40.215 # Server started, Redis version 3.2.12\n1:M 29 Dec 13:13:40.215 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Dec 13:13:40.215 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 29 13:13:41.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4wncn redis-master --namespace=e2e-tests-kubectl-cgdpq --tail=1'
Dec 29 13:13:42.185: INFO: stderr: ""
Dec 29 13:13:42.185: INFO: stdout: "1:M 29 Dec 13:13:40.215 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 29 13:13:42.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4wncn redis-master --namespace=e2e-tests-kubectl-cgdpq --limit-bytes=1'
Dec 29 13:13:42.350: INFO: stderr: ""
Dec 29 13:13:42.350: INFO: stdout: " "
STEP: exposing timestamps
Dec 29 13:13:42.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4wncn redis-master --namespace=e2e-tests-kubectl-cgdpq --tail=1 --timestamps'
Dec 29 13:13:42.526: INFO: stderr: ""
Dec 29 13:13:42.526: INFO: stdout: "2019-12-29T13:13:40.216511949Z 1:M 29 Dec 13:13:40.215 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 29 13:13:45.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4wncn redis-master --namespace=e2e-tests-kubectl-cgdpq --since=1s'
Dec 29 13:13:45.225: INFO: stderr: ""
Dec 29 13:13:45.226: INFO: stdout: ""
Dec 29 13:13:45.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4wncn redis-master --namespace=e2e-tests-kubectl-cgdpq --since=24h'
Dec 29 13:13:45.373: INFO: stderr: ""
Dec 29 13:13:45.373: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Dec 13:13:40.215 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Dec 13:13:40.215 # Server started, Redis version 3.2.12\n1:M 29 Dec 13:13:40.215 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Dec 13:13:40.215 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 29 13:13:45.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cgdpq'
Dec 29 13:13:45.475: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 29 13:13:45.475: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 29 13:13:45.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-cgdpq'
Dec 29 13:13:45.689: INFO: stderr: "No resources found.\n"
Dec 29 13:13:45.690: INFO: stdout: ""
Dec 29 13:13:45.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-cgdpq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 29 13:13:45.832: INFO: stderr: ""
Dec 29 13:13:45.832: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:13:45.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cgdpq" for this suite.
Dec 29 13:13:53.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:13:54.108: INFO: namespace: e2e-tests-kubectl-cgdpq, resource: bindings, ignored listing per whitelist
Dec 29 13:13:54.169: INFO: namespace e2e-tests-kubectl-cgdpq deletion completed in 8.228616813s

• [SLOW TEST:22.197 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
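The kubectl log filtering exercised above uses `--tail`, `--limit-bytes`, `--timestamps`, and `--since`; the truncation semantics of the first two can be sketched locally against a stand-in file (the sample file and its contents are an assumption, not part of the test; kubectl applies the equivalent truncation server-side):

```shell
# Stand-in for a container's log stream (illustrative contents).
printf 'line one\nline two\nline three\n' > /tmp/sample.log

# --tail=1 analogue: only the last line survives.
tail -n 1 /tmp/sample.log    # prints "line three"

# --limit-bytes=1 analogue: only the first byte survives, which is why
# the test's stdout above is the single character " ".
head -c 1 /tmp/sample.log    # prints "l"
```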
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:13:54.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 29 13:13:54.819: INFO: Waiting up to 5m0s for pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k" in namespace "e2e-tests-svcaccounts-xbm7k" to be "success or failure"
Dec 29 13:13:54.845: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 25.316156ms
Dec 29 13:13:56.869: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049376015s
Dec 29 13:13:58.894: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074418834s
Dec 29 13:14:00.911: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091005693s
Dec 29 13:14:03.011: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190995725s
Dec 29 13:14:05.025: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205172363s
Dec 29 13:14:07.838: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 13.018782948s
Dec 29 13:14:09.857: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 15.03768892s
Dec 29 13:14:12.530: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Pending", Reason="", readiness=false. Elapsed: 17.709919676s
Dec 29 13:14:14.557: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.737773353s
STEP: Saw pod success
Dec 29 13:14:14.558: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k" satisfied condition "success or failure"
Dec 29 13:14:14.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k container token-test: 
STEP: delete the pod
Dec 29 13:14:14.922: INFO: Waiting for pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k to disappear
Dec 29 13:14:14.948: INFO: Pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-fcj7k no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 29 13:14:14.967: INFO: Waiting up to 5m0s for pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l" in namespace "e2e-tests-svcaccounts-xbm7k" to be "success or failure"
Dec 29 13:14:15.253: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 285.864607ms
Dec 29 13:14:17.378: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411126131s
Dec 29 13:14:19.397: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429882739s
Dec 29 13:14:21.426: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458670204s
Dec 29 13:14:24.165: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 9.197480999s
Dec 29 13:14:26.185: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 11.217799485s
Dec 29 13:14:28.280: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 13.312825178s
Dec 29 13:14:30.406: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 15.438955921s
Dec 29 13:14:33.012: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 18.044875959s
Dec 29 13:14:35.036: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 20.069006599s
Dec 29 13:14:37.069: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 22.101448418s
Dec 29 13:14:39.082: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Pending", Reason="", readiness=false. Elapsed: 24.114813041s
Dec 29 13:14:41.097: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.129744401s
STEP: Saw pod success
Dec 29 13:14:41.097: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l" satisfied condition "success or failure"
Dec 29 13:14:41.101: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l container root-ca-test: 
STEP: delete the pod
Dec 29 13:14:41.450: INFO: Waiting for pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l to disappear
Dec 29 13:14:41.752: INFO: Pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-nz79l no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 29 13:14:41.815: INFO: Waiting up to 5m0s for pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt" in namespace "e2e-tests-svcaccounts-xbm7k" to be "success or failure"
Dec 29 13:14:41.998: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 182.926907ms
Dec 29 13:14:44.023: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207552541s
Dec 29 13:14:46.048: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232322935s
Dec 29 13:14:48.987: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 7.172027488s
Dec 29 13:14:51.002: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.186598919s
Dec 29 13:14:53.045: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.229707965s
Dec 29 13:14:55.622: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.806772465s
Dec 29 13:14:58.017: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.202134559s
Dec 29 13:15:00.101: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.286106329s
Dec 29 13:15:02.194: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 20.379005805s
Dec 29 13:15:04.224: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 22.408560854s
Dec 29 13:15:06.258: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Pending", Reason="", readiness=false. Elapsed: 24.442661391s
Dec 29 13:15:08.331: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.5160684s
STEP: Saw pod success
Dec 29 13:15:08.332: INFO: Pod "pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt" satisfied condition "success or failure"
Dec 29 13:15:08.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt container namespace-test: 
STEP: delete the pod
Dec 29 13:15:10.823: INFO: Waiting for pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt to disappear
Dec 29 13:15:11.468: INFO: Pod pod-service-account-10986d13-2a3d-11ea-9252-0242ac110005-h7ggt no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:15:11.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-xbm7k" for this suite.
Dec 29 13:15:19.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:15:19.971: INFO: namespace: e2e-tests-svcaccounts-xbm7k, resource: bindings, ignored listing per whitelist
Dec 29 13:15:20.055: INFO: namespace e2e-tests-svcaccounts-xbm7k deletion completed in 8.565958183s

• [SLOW TEST:85.886 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
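The ServiceAccounts test above runs three pods that read the auto-mounted credentials (token, root CA, namespace); a sketch of the first of these, reading the default token mount path (pod name, image, and command are illustrative):

```yaml
# Pod consuming the auto-mounted service account token; ca.crt and
# namespace live beside it under the same directory and are checked by
# the test's other two pods.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-token-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox                        # illustrative image
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```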
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:15:20.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 29 13:15:33.132: INFO: Successfully updated pod "annotationupdate43906340-2a3d-11ea-9252-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:15:35.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qmlph" for this suite.
Dec 29 13:15:59.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:15:59.752: INFO: namespace: e2e-tests-downward-api-qmlph, resource: bindings, ignored listing per whitelist
Dec 29 13:15:59.763: INFO: namespace e2e-tests-downward-api-qmlph deletion completed in 24.388382915s

• [SLOW TEST:39.707 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
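The downward-API test above mounts pod annotations as a volume file and expects the file to track metadata updates; a sketch of such a pod (names, annotation value, and command are illustrative):

```yaml
# Annotations are projected into /etc/podinfo/annotations via a
# downwardAPI volume; when the test updates the pod's annotations, the
# kubelet refreshes the file in place.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo   # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox              # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```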
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:15:59.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5b39fd79-2a3d-11ea-9252-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 29 13:16:00.041: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005" in namespace "e2e-tests-projected-ltjf8" to be "success or failure"
Dec 29 13:16:00.109: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.428244ms
Dec 29 13:16:02.124: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083042025s
Dec 29 13:16:04.139: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098019255s
Dec 29 13:16:06.353: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311389258s
Dec 29 13:16:08.373: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33130677s
Dec 29 13:16:10.386: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.34504015s
Dec 29 13:16:12.407: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.365522955s
STEP: Saw pod success
Dec 29 13:16:12.407: INFO: Pod "pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:16:12.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 29 13:16:14.235: INFO: Waiting for pod pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005 to disappear
Dec 29 13:16:14.253: INFO: Pod pod-projected-configmaps-5b3bad6e-2a3d-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:16:14.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ltjf8" for this suite.
Dec 29 13:16:20.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:16:20.458: INFO: namespace: e2e-tests-projected-ltjf8, resource: bindings, ignored listing per whitelist
Dec 29 13:16:20.644: INFO: namespace e2e-tests-projected-ltjf8 deletion completed in 6.377810713s

• [SLOW TEST:20.881 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
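The repeated `Phase="Pending" ... Elapsed` lines above come from the framework polling the pod's status every couple of seconds until it reaches a terminal phase or the stated 5m timeout expires. A minimal sketch of that wait loop — `get_phase` is a hypothetical stand-in for the API read of `pod.status.phase`, not the framework's actual code:

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until a terminal phase or until timeout_s elapses.

    get_phase is a stand-in for an API call reading pod.status.phase.
    Returns (phase, elapsed_seconds); raises TimeoutError on timeout.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Mirrors the log format: Phase="Pending", Elapsed: 8.331s
        print(f'Pod: Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval_s)
```

With a canned sequence of phases this reproduces the shape of the log above: a few Pending lines, then a terminal Succeeded.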
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:16:20.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:16:21.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-wmsm7" for this suite.
Dec 29 13:16:27.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:16:27.310: INFO: namespace: e2e-tests-kubelet-test-wmsm7, resource: bindings, ignored listing per whitelist
Dec 29 13:16:27.335: INFO: namespace e2e-tests-kubelet-test-wmsm7 deletion completed in 6.314496576s

• [SLOW TEST:6.691 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:16:27.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:16:41.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rmm75" for this suite.
Dec 29 13:17:39.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:17:39.984: INFO: namespace: e2e-tests-kubelet-test-rmm75, resource: bindings, ignored listing per whitelist
Dec 29 13:17:40.020: INFO: namespace e2e-tests-kubelet-test-rmm75 deletion completed in 58.230809036s

• [SLOW TEST:72.684 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
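The hostAliases test above verifies that aliases declared in the pod spec end up as extra entries in the container's `/etc/hosts`. A rough illustration of the mapping from `hostAliases` entries to hosts-file lines (my own helper for illustration, not the kubelet's actual implementation):

```python
def host_aliases_to_lines(host_aliases):
    """Render pod-spec hostAliases entries as /etc/hosts lines.

    Each entry maps one IP to one or more hostnames, e.g.
    {"ip": "127.0.0.1", "hostnames": ["foo.local", "bar.local"]}.
    """
    return [f'{entry["ip"]}\t{" ".join(entry["hostnames"])}'
            for entry in host_aliases]
```

For one entry with two hostnames this yields a single tab-separated line, which is the shape the test greps for inside the pod.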
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:17:40.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:17:47.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-4ftsv" for this suite.
Dec 29 13:17:53.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:17:53.354: INFO: namespace: e2e-tests-namespaces-4ftsv, resource: bindings, ignored listing per whitelist
Dec 29 13:17:53.428: INFO: namespace e2e-tests-namespaces-4ftsv deletion completed in 6.163379635s
STEP: Destroying namespace "e2e-tests-nsdeletetest-d69nl" for this suite.
Dec 29 13:17:53.432: INFO: Namespace e2e-tests-nsdeletetest-d69nl was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-gxzgb" for this suite.
Dec 29 13:18:01.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:18:01.616: INFO: namespace: e2e-tests-nsdeletetest-gxzgb, resource: bindings, ignored listing per whitelist
Dec 29 13:18:01.691: INFO: namespace e2e-tests-nsdeletetest-gxzgb deletion completed in 8.258591774s

• [SLOW TEST:21.671 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
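The Namespaces test above deletes a namespace containing a service, waits for the deletion to finish, recreates the namespace, and asserts the service is gone. The invariant it checks — namespaced objects are removed along with their namespace — can be sketched against a toy in-memory store (hypothetical; it stands in for the API server's cascading deletion, not a real client):

```python
class FakeCluster:
    """Toy store: deleting a namespace also removes the objects inside it."""

    def __init__(self):
        self.services = {}  # namespace name -> set of service names

    def create_namespace(self, ns):
        self.services.setdefault(ns, set())

    def create_service(self, ns, name):
        self.services[ns].add(name)

    def delete_namespace(self, ns):
        # Cascading delete: the namespace's services go away with it.
        self.services.pop(ns, None)

    def list_services(self, ns):
        return sorted(self.services.get(ns, set()))
```

Running the test's sequence against this store ends with an empty service list in the recreated namespace, which is exactly what the "Verifying there is no service in the namespace" step asserts.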
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 29 13:18:01.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a3e9f54e-2a3d-11ea-9252-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 29 13:18:02.095: INFO: Waiting up to 5m0s for pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005" in namespace "e2e-tests-secrets-f5mc5" to be "success or failure"
Dec 29 13:18:02.102: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074597ms
Dec 29 13:18:04.716: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620269503s
Dec 29 13:18:06.729: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.632933521s
Dec 29 13:18:08.745: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649764241s
Dec 29 13:18:11.302: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.206855987s
Dec 29 13:18:13.388: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.292184788s
Dec 29 13:18:15.406: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.310439201s
Dec 29 13:18:18.471: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.375682796s
STEP: Saw pod success
Dec 29 13:18:18.472: INFO: Pod "pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005" satisfied condition "success or failure"
Dec 29 13:18:18.531: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 29 13:18:19.717: INFO: Waiting for pod pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005 to disappear
Dec 29 13:18:19.738: INFO: Pod pod-secrets-a3f38838-2a3d-11ea-9252-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 29 13:18:19.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-f5mc5" for this suite.
Dec 29 13:18:28.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 29 13:18:28.136: INFO: namespace: e2e-tests-secrets-f5mc5, resource: bindings, ignored listing per whitelist
Dec 29 13:18:28.163: INFO: namespace e2e-tests-secrets-f5mc5 deletion completed in 8.412182889s

• [SLOW TEST:26.471 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
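In the defaultMode test above, the secret volume is mounted with a specific `defaultMode` and the test pod checks the resulting file permissions. In manifests the mode is an integer that may be written in decimal or octal, which is an easy place to trip up; a quick conversion helper (for illustration only):

```python
def mode_to_octal_string(default_mode):
    """Format a defaultMode integer as a zero-padded octal string.

    E.g. 256 decimal and 0o400 octal are the same mode: owner read-only.
    """
    return format(default_mode, "04o")
```

So a manifest value of `256` and one of `0400` request identical permissions on the projected files.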
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Dec 29 13:18:28.164: INFO: Running AfterSuite actions on all nodes
Dec 29 13:18:28.164: INFO: Running AfterSuite actions on node 1
Dec 29 13:18:28.164: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Secrets [It] should be consumable from pods in env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395

Ran 199 of 2164 Specs in 9080.976 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (9081.69s)
FAIL
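The closing summary above ("Ran 199 of 2164 Specs ... 198 Passed | 1 Failed ...") is stable enough to parse when post-processing runs in CI. A small parser for those two lines — it assumes this exact Ginkgo v1 output format:

```python
import re

SUMMARY_RE = re.compile(
    r"Ran (?P<ran>\d+) of (?P<total>\d+) Specs in (?P<secs>[\d.]+) seconds"
)
RESULT_RE = re.compile(
    r"(?P<passed>\d+) Passed \| (?P<failed>\d+) Failed \| "
    r"(?P<pending>\d+) Pending \| (?P<skipped>\d+) Skipped"
)

def parse_summary(text):
    """Extract run counts from a Ginkgo v1 summary; None if not found."""
    m1, m2 = SUMMARY_RE.search(text), RESULT_RE.search(text)
    if not (m1 and m2):
        return None
    counts = {k: int(v)
              for k, v in {**m1.groupdict(), **m2.groupdict()}.items()
              if k != "secs"}
    counts["seconds"] = float(m1.group("secs"))
    return counts
```

Fed the tail of this log, it reports 198 passed, 1 failed, and 1965 skipped out of 2164 specs, matching the single `[Fail]` listed under "Summarizing 1 Failure".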