I1226 10:47:16.408542 8 e2e.go:224] Starting e2e run "144a649d-27cd-11ea-948a-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577357234 - Will randomize all specs
Will run 201 of 2164 specs

Dec 26 10:47:17.365: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 10:47:17.378: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 26 10:47:17.398: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 26 10:47:17.443: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 26 10:47:17.443: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 26 10:47:17.443: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 26 10:47:17.456: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 26 10:47:17.456: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 26 10:47:17.456: INFO: e2e test version: v1.13.12
Dec 26 10:47:17.458: INFO: kube-apiserver version: v1.13.8
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:47:17.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Dec 26 10:47:17.603: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 10:47:17.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4f7p7'
Dec 26 10:47:19.568: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 26 10:47:19.568: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 26 10:47:19.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-4f7p7'
Dec 26 10:47:19.912: INFO: stderr: ""
Dec 26 10:47:19.912: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:47:19.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4f7p7" for this suite.
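For reference, the deprecated `kubectl run --generator=job/v1` invocation recorded above corresponds roughly to applying a plain batch/v1 Job manifest. This is a sketch, not the test's actual implementation: the image and restart policy come from the log, while the container name and everything else are assumptions.

```shell
# Rough Job-manifest equivalent of the `kubectl run --restart=OnFailure
# --generator=job/v1` call logged above. Image and restartPolicy are taken
# from the log; the container name is an arbitrary assumption.
cat <<'EOF' > e2e-test-nginx-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure
EOF
# To apply it (requires a reachable cluster, not assumed here):
#   kubectl apply -f e2e-test-nginx-job.yaml
```

Writing the manifest out is what `kubectl create -f` style workflows do explicitly; the generator flag merely synthesized an equivalent object server-side.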
Dec 26 10:47:28.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:47:28.538: INFO: namespace: e2e-tests-kubectl-4f7p7, resource: bindings, ignored listing per whitelist
Dec 26 10:47:28.732: INFO: namespace e2e-tests-kubectl-4f7p7 deletion completed in 8.808531907s

• [SLOW TEST:11.274 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:47:28.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 26 10:47:28.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xmlq2'
Dec 26 10:47:29.498: INFO: stderr: ""
Dec 26 10:47:29.498: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 26 10:47:30.881: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:30.881: INFO: Found 0 / 1
Dec 26 10:47:31.513: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:31.513: INFO: Found 0 / 1
Dec 26 10:47:32.541: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:32.541: INFO: Found 0 / 1
Dec 26 10:47:33.509: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:33.509: INFO: Found 0 / 1
Dec 26 10:47:34.927: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:34.927: INFO: Found 0 / 1
Dec 26 10:47:36.598: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:36.598: INFO: Found 0 / 1
Dec 26 10:47:37.522: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:37.522: INFO: Found 0 / 1
Dec 26 10:47:38.545: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:38.545: INFO: Found 0 / 1
Dec 26 10:47:39.519: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:39.519: INFO: Found 1 / 1
Dec 26 10:47:39.519: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 26 10:47:39.537: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:39.537: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 26 10:47:39.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-t82d6 --namespace=e2e-tests-kubectl-xmlq2 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 26 10:47:39.719: INFO: stderr: ""
Dec 26 10:47:39.719: INFO: stdout: "pod/redis-master-t82d6 patched\n"
STEP: checking annotations
Dec 26 10:47:39.732: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 10:47:39.733: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:47:39.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xmlq2" for this suite.
Dec 26 10:48:03.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:48:04.275: INFO: namespace: e2e-tests-kubectl-xmlq2, resource: bindings, ignored listing per whitelist
Dec 26 10:48:04.281: INFO: namespace e2e-tests-kubectl-xmlq2 deletion completed in 24.541967888s

• [SLOW TEST:35.549 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:48:04.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-pvn8
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 10:48:04.651: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pvn8" in namespace "e2e-tests-subpath-kqjlx" to be "success or failure"
Dec 26 10:48:04.659: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120607ms
Dec 26 10:48:06.680: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029261637s
Dec 26 10:48:08.700: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049256118s
Dec 26 10:48:10.887: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236239645s
Dec 26 10:48:12.904: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253740148s
Dec 26 10:48:14.916: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.26580196s
Dec 26 10:48:16.932: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.281325716s
Dec 26 10:48:18.956: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.305075116s
Dec 26 10:48:20.979: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 16.328763039s
Dec 26 10:48:22.993: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 18.342732587s
Dec 26 10:48:25.022: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 20.371824244s
Dec 26 10:48:27.037: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 22.386275273s
Dec 26 10:48:29.056: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 24.405461544s
Dec 26 10:48:31.071: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 26.420878323s
Dec 26 10:48:33.091: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 28.440225097s
Dec 26 10:48:35.119: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 30.468769042s
Dec 26 10:48:37.152: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 32.501856655s
Dec 26 10:48:39.182: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 34.531390062s
Dec 26 10:48:41.198: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.547699202s
STEP: Saw pod success
Dec 26 10:48:41.198: INFO: Pod "pod-subpath-test-configmap-pvn8" satisfied condition "success or failure"
Dec 26 10:48:41.205: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pvn8 container test-container-subpath-configmap-pvn8:
STEP: delete the pod
Dec 26 10:48:41.625: INFO: Waiting for pod pod-subpath-test-configmap-pvn8 to disappear
Dec 26 10:48:41.744: INFO: Pod pod-subpath-test-configmap-pvn8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pvn8
Dec 26 10:48:41.744: INFO: Deleting pod "pod-subpath-test-configmap-pvn8" in namespace "e2e-tests-subpath-kqjlx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:48:41.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-kqjlx" for this suite.
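Long "Waiting up to 5m0s" polls like the one above are easier to read once reduced to their distinct phase transitions. A minimal sketch using only POSIX text tools; the three sample lines are abridged from the log above, and this is an illustration, not part of the test framework:

```shell
# Collapse a pod-phase poll into its sequence of distinct phases.
# Sample lines are copied (abridged) from the log above.
log='Dec 26 10:48:04.659: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120607ms
Dec 26 10:48:20.979: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Running", Reason="", readiness=false. Elapsed: 16.328763039s
Dec 26 10:48:41.198: INFO: Pod "pod-subpath-test-configmap-pvn8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.547699202s'
# Pull out the Phase="..." value from each line, then drop adjacent repeats.
printf '%s\n' "$log" | sed -n 's/.*Phase="\([^"]*\)".*/\1/p' | uniq
# prints: Pending, Running, Succeeded (one per line)
```

On a full log, `uniq` collapses the repeated Pending/Running entries so only the transition points remain.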
Dec 26 10:48:47.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:48:47.992: INFO: namespace: e2e-tests-subpath-kqjlx, resource: bindings, ignored listing per whitelist
Dec 26 10:48:48.065: INFO: namespace e2e-tests-subpath-kqjlx deletion completed in 6.298979676s

• [SLOW TEST:43.783 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:48:48.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-4be1e5fb-27cd-11ea-948a-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-4be1e5fb-27cd-11ea-948a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:49:00.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-65sts" for this suite.
Dec 26 10:49:24.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:49:25.007: INFO: namespace: e2e-tests-configmap-65sts, resource: bindings, ignored listing per whitelist
Dec 26 10:49:25.066: INFO: namespace e2e-tests-configmap-65sts deletion completed in 24.250641027s

• [SLOW TEST:37.001 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:49:25.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 26 10:49:25.283: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-dgrxb" to be "success or failure"
Dec 26 10:49:25.312: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.244457ms
Dec 26 10:49:27.340: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056729842s
Dec 26 10:49:29.417: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133998575s
Dec 26 10:49:31.438: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154910813s
Dec 26 10:49:33.469: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185827922s
Dec 26 10:49:35.621: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.337154732s
Dec 26 10:49:37.656: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.372385879s
Dec 26 10:49:39.671: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.387212772s
STEP: Saw pod success
Dec 26 10:49:39.671: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 26 10:49:39.676: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 26 10:49:39.800: INFO: Waiting for pod pod-host-path-test to disappear
Dec 26 10:49:39.810: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:49:39.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-dgrxb" for this suite.
Dec 26 10:49:45.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:49:46.113: INFO: namespace: e2e-tests-hostpath-dgrxb, resource: bindings, ignored listing per whitelist
Dec 26 10:49:46.113: INFO: namespace e2e-tests-hostpath-dgrxb deletion completed in 6.296052578s

• [SLOW TEST:21.047 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:49:46.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:49:56.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gg5vt" for this suite.
Dec 26 10:50:44.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:50:44.714: INFO: namespace: e2e-tests-kubelet-test-gg5vt, resource: bindings, ignored listing per whitelist
Dec 26 10:50:44.784: INFO: namespace e2e-tests-kubelet-test-gg5vt deletion completed in 48.176946004s

• [SLOW TEST:58.670 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:50:44.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-9172a597-27cd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 10:50:45.047: INFO: Waiting up to 5m0s for pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-6p7xw" to be "success or failure"
Dec 26 10:50:45.057: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266929ms
Dec 26 10:50:47.445: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397901977s
Dec 26 10:50:49.461: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414447397s
Dec 26 10:50:51.482: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435399814s
Dec 26 10:50:53.593: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546055419s
Dec 26 10:50:55.605: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.557993011s
STEP: Saw pod success
Dec 26 10:50:55.605: INFO: Pod "pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 10:50:55.609: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 26 10:50:56.139: INFO: Waiting for pod pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005 to disappear
Dec 26 10:50:56.547: INFO: Pod pod-configmaps-9173f553-27cd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:50:56.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6p7xw" for this suite.
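The configmap-volume-with-mappings pattern exercised above can be sketched as a manifest. This is a hypothetical illustration, not the test's actual fixture: all names, the key, and the mount path are assumptions; only the mechanism (a `configMap` volume whose `items` remap a key to a file path) matches what the test exercises.

```shell
# Sketch of a ConfigMap consumed via a volume with a key-to-path mapping,
# the pattern exercised by the test above. All names here are hypothetical.
cat <<'EOF' > configmap-volume-demo.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  demo-key: demo-value
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: config-vol
      mountPath: /etc/demo
  volumes:
  - name: config-vol
    configMap:
      name: demo-config
      items:
      - key: demo-key
        path: mapped-file
  restartPolicy: Never
EOF
# kubectl apply -f configmap-volume-demo.yaml   # requires a cluster, not run here
```

With the `items` mapping, the container sees the value at /etc/demo/mapped-file rather than at a file named after the key.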
Dec 26 10:51:02.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:51:02.970: INFO: namespace: e2e-tests-configmap-6p7xw, resource: bindings, ignored listing per whitelist
Dec 26 10:51:03.021: INFO: namespace e2e-tests-configmap-6p7xw deletion completed in 6.418718104s

• [SLOW TEST:18.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:51:03.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 10:51:03.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 26 10:51:03.407: INFO: stderr: ""
Dec 26 10:51:03.407: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:51:03.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hh7dq" for this suite.
Dec 26 10:51:09.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:51:09.655: INFO: namespace: e2e-tests-kubectl-hh7dq, resource: bindings, ignored listing per whitelist
Dec 26 10:51:09.716: INFO: namespace e2e-tests-kubectl-hh7dq deletion completed in 6.278502943s

• [SLOW TEST:6.695 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:51:09.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-4l58
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 10:51:09.970: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4l58" in namespace "e2e-tests-subpath-dhgm8" to be "success or failure"
Dec 26 10:51:09.979: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 9.267447ms
Dec 26 10:51:12.145: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174933875s
Dec 26 10:51:14.177: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207103146s
Dec 26 10:51:16.205: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235389139s
Dec 26 10:51:18.361: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.390625524s
Dec 26 10:51:20.387: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.417402913s
Dec 26 10:51:22.514: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 12.544571059s
Dec 26 10:51:24.533: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Pending", Reason="", readiness=false. Elapsed: 14.562660963s
Dec 26 10:51:26.577: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 16.606939726s
Dec 26 10:51:28.605: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 18.635447582s
Dec 26 10:51:30.644: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 20.673756557s
Dec 26 10:51:32.659: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 22.689328968s
Dec 26 10:51:34.678: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 24.708604209s
Dec 26 10:51:36.694: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 26.724042294s
Dec 26 10:51:38.720: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 28.750318641s
Dec 26 10:51:40.739: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 30.769106067s
Dec 26 10:51:42.748: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Running", Reason="", readiness=false. Elapsed: 32.778179013s
Dec 26 10:51:44.762: INFO: Pod "pod-subpath-test-secret-4l58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.792115532s
STEP: Saw pod success
Dec 26 10:51:44.762: INFO: Pod "pod-subpath-test-secret-4l58" satisfied condition "success or failure"
Dec 26 10:51:44.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-4l58 container test-container-subpath-secret-4l58:
STEP: delete the pod
Dec 26 10:51:45.258: INFO: Waiting for pod pod-subpath-test-secret-4l58 to disappear
Dec 26 10:51:45.801: INFO: Pod pod-subpath-test-secret-4l58 no longer exists
STEP: Deleting pod pod-subpath-test-secret-4l58
Dec 26 10:51:45.801: INFO: Deleting pod "pod-subpath-test-secret-4l58" in namespace "e2e-tests-subpath-dhgm8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:51:45.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dhgm8" for this suite.
Dec 26 10:51:52.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:51:52.362: INFO: namespace: e2e-tests-subpath-dhgm8, resource: bindings, ignored listing per whitelist
Dec 26 10:51:52.386: INFO: namespace e2e-tests-subpath-dhgm8 deletion completed in 6.419256281s
• [SLOW TEST:42.669 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:51:52.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b9bead1f-27cd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 10:51:52.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-gdkck" to be "success or failure"
Dec 26 10:51:52.834: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.102141ms
Dec 26 10:51:54.873: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061209185s
Dec 26 10:51:56.894: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082176225s
Dec 26 10:51:59.308: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496453497s
Dec 26 10:52:01.799: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.987412601s
Dec 26 10:52:03.814: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.001945746s
STEP: Saw pod success
Dec 26 10:52:03.814: INFO: Pod "pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 10:52:03.818: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 26 10:52:04.403: INFO: Waiting for pod pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005 to disappear
Dec 26 10:52:04.470: INFO: Pod pod-configmaps-b9c14b79-27cd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:52:04.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gdkck" for this suite.
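[Editor's note: the pod spec generated by the "consumable in multiple volumes in the same pod" test is not echoed in the log. A sketch of the pattern it exercises, one ConfigMap mounted through two separate volumes in a single pod, is below; all names and data keys here are illustrative, not the harness-generated ones.]

```yaml
# Illustrative sketch only: one ConfigMap consumed via two volume mounts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume-1/data-1", "/etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```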
Dec 26 10:52:10.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:52:10.759: INFO: namespace: e2e-tests-configmap-gdkck, resource: bindings, ignored listing per whitelist
Dec 26 10:52:10.847: INFO: namespace e2e-tests-configmap-gdkck deletion completed in 6.293871092s
• [SLOW TEST:18.460 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:52:10.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 26 10:52:11.121: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4mh8n,SelfLink:/api/v1/namespaces/e2e-tests-watch-4mh8n/configmaps/e2e-watch-test-resource-version,UID:c4bf27c4-27cd-11ea-a994-fa163e34d433,ResourceVersion:16113311,Generation:0,CreationTimestamp:2019-12-26 10:52:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 26 10:52:11.121: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4mh8n,SelfLink:/api/v1/namespaces/e2e-tests-watch-4mh8n/configmaps/e2e-watch-test-resource-version,UID:c4bf27c4-27cd-11ea-a994-fa163e34d433,ResourceVersion:16113312,Generation:0,CreationTimestamp:2019-12-26 10:52:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:52:11.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4mh8n" for this suite.
Dec 26 10:52:17.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:52:17.453: INFO: namespace: e2e-tests-watch-4mh8n, resource: bindings, ignored listing per whitelist
Dec 26 10:52:17.495: INFO: namespace e2e-tests-watch-4mh8n deletion completed in 6.367974041s
• [SLOW TEST:6.648 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:52:17.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-c8a921f3-27cd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 10:52:17.772: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-v4l5s" to be "success or failure"
Dec 26 10:52:17.793: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.906964ms
Dec 26 10:52:19.935: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162460405s
Dec 26 10:52:21.963: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189648885s
Dec 26 10:52:24.049: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27603005s
Dec 26 10:52:26.061: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.287979248s
Dec 26 10:52:28.070: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.297091593s
STEP: Saw pod success
Dec 26 10:52:28.070: INFO: Pod "pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 10:52:28.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Dec 26 10:52:29.388: INFO: Waiting for pod pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005 to disappear
Dec 26 10:52:29.446: INFO: Pod pod-projected-secrets-c8aa176e-27cd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:52:29.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v4l5s" for this suite.
Dec 26 10:52:35.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:52:35.674: INFO: namespace: e2e-tests-projected-v4l5s, resource: bindings, ignored listing per whitelist
Dec 26 10:52:35.839: INFO: namespace e2e-tests-projected-v4l5s deletion completed in 6.269451502s
• [SLOW TEST:18.343 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:52:35.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 26 10:52:46.711: INFO: Successfully updated pod "pod-update-d3ad77cf-27cd-11ea-948a-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 26 10:52:46.723: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:52:46.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-twsbw" for this suite.
Dec 26 10:53:10.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:53:10.875: INFO: namespace: e2e-tests-pods-twsbw, resource: bindings, ignored listing per whitelist
Dec 26 10:53:10.932: INFO: namespace e2e-tests-pods-twsbw deletion completed in 24.201538195s
• [SLOW TEST:35.091 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:53:10.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 26 10:53:11.145: INFO: Waiting up to 5m0s for pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-ld4v9" to be "success or failure"
Dec 26 10:53:11.173: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.422604ms
Dec 26 10:53:13.219: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074201662s
Dec 26 10:53:15.243: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097780642s
Dec 26 10:53:17.377: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231663261s
Dec 26 10:53:19.401: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255703511s
Dec 26 10:53:21.419: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.274483977s
STEP: Saw pod success
Dec 26 10:53:21.420: INFO: Pod "downward-api-e8893352-27cd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 10:53:21.426: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e8893352-27cd-11ea-948a-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 26 10:53:21.945: INFO: Waiting for pod downward-api-e8893352-27cd-11ea-948a-0242ac110005 to disappear
Dec 26 10:53:22.196: INFO: Pod downward-api-e8893352-27cd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:53:22.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ld4v9" for this suite.
Dec 26 10:53:28.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:53:28.448: INFO: namespace: e2e-tests-downward-api-ld4v9, resource: bindings, ignored listing per whitelist
Dec 26 10:53:28.665: INFO: namespace e2e-tests-downward-api-ld4v9 deletion completed in 6.455812065s
• [SLOW TEST:17.732 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:53:28.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 26 10:53:28.863: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:53:46.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-j2d67" for this suite.
Dec 26 10:53:52.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:53:52.754: INFO: namespace: e2e-tests-init-container-j2d67, resource: bindings, ignored listing per whitelist
Dec 26 10:53:52.754: INFO: namespace e2e-tests-init-container-j2d67 deletion completed in 6.249912903s
• [SLOW TEST:24.089 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:53:52.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-016b6256-27ce-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 10:53:52.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-s2kbs" to be "success or failure"
Dec 26 10:53:52.950: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.446615ms
Dec 26 10:53:55.053: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114804433s
Dec 26 10:53:57.095: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156154456s
Dec 26 10:53:59.211: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27281146s
Dec 26 10:54:01.491: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.5525134s
Dec 26 10:54:03.506: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.566997832s
STEP: Saw pod success
Dec 26 10:54:03.506: INFO: Pod "pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 10:54:03.514: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 26 10:54:03.619: INFO: Waiting for pod pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005 to disappear
Dec 26 10:54:04.260: INFO: Pod pod-configmaps-016c8c11-27ce-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:54:04.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s2kbs" for this suite.
Dec 26 10:54:10.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:54:10.831: INFO: namespace: e2e-tests-configmap-s2kbs, resource: bindings, ignored listing per whitelist
Dec 26 10:54:10.839: INFO: namespace e2e-tests-configmap-s2kbs deletion completed in 6.566962245s
• [SLOW TEST:18.084 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:54:10.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 26 10:54:29.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:29.458: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:31.458: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:31.955: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:33.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:33.631: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:35.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:35.490: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:37.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:37.473: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:39.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:39.468: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:41.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:41.472: INFO: Pod pod-with-poststart-http-hook still exists
Dec 26 10:54:43.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 26 10:54:43.475: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 10:54:43.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-grsq5" for this suite.
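[Editor's note: the hook pod's spec is not echoed in the log, only its deletion. The shape of a pod exercising a postStart httpGet hook against the helper container created in BeforeEach is roughly as follows; the path, port, and host values are hypothetical placeholders, not the harness's generated values.]

```yaml
# Illustrative sketch only: a postStart httpGet lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical handler path
          port: 8080                  # hypothetical handler port
          host: 10.32.0.5             # hypothetical IP of the hook-handler pod
```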
Dec 26 10:55:07.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 10:55:07.560: INFO: namespace: e2e-tests-container-lifecycle-hook-grsq5, resource: bindings, ignored listing per whitelist
Dec 26 10:55:07.685: INFO: namespace e2e-tests-container-lifecycle-hook-grsq5 deletion completed in 24.202183192s
• [SLOW TEST:56.846 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 10:55:07.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-q2lhs
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-q2lhs
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-q2lhs
Dec 26 10:55:07.938: INFO: Found 0 stateful pods, waiting for 1
Dec 26 10:55:17.955: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 26 10:55:17.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 10:55:19.035: INFO: stderr: ""
Dec 26 10:55:19.035: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 10:55:19.035: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 26 10:55:19.049: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 26 10:55:29.063: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 10:55:29.063: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 10:55:29.210: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 26 10:55:29.210: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }]
Dec 26 10:55:29.210: INFO:
Dec 26 10:55:29.210: INFO: StatefulSet ss has not reached scale 3, at 1
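[Editor's note: the `mv` commands above work because each stateful pod serves its readiness check from nginx's docroot: moving index.html aside makes the probe fail (Ready=false) without killing the container, and moving it back restores readiness. A probe of roughly this shape would produce the Ready transitions seen in the log; this is a sketch, not the test's actual generated spec.]

```yaml
# Illustrative sketch only: readiness fails while index.html is moved to /tmp.
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  periodSeconds: 1
  failureThreshold: 1
```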
Dec 26 10:55:30.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.872497768s
Dec 26 10:55:32.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.287245047s
Dec 26 10:55:33.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.776057553s
Dec 26 10:55:34.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.737485444s
Dec 26 10:55:35.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.700969319s
Dec 26 10:55:37.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.664320877s
Dec 26 10:55:38.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 531.559264ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-q2lhs
Dec 26 10:55:39.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:55:40.709: INFO: stderr: ""
Dec 26 10:55:40.709: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 10:55:40.709: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 26 10:55:40.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:55:41.244: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 26 10:55:41.244: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 10:55:41.244: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 26 10:55:41.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:55:41.663: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 26 10:55:41.663: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 10:55:41.663: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 26 10:55:41.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 10:55:41.803: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 10:55:41.803: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 10:55:51.837: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 10:55:51.837: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 10:55:51.837: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 26 10:55:51.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 10:55:52.628: INFO: stderr: ""
Dec 26 10:55:52.628: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 10:55:52.628: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 26 10:55:52.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 10:55:53.433: INFO: stderr: ""
Dec 26 10:55:53.433: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 10:55:53.433: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 26 10:55:53.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 10:55:54.233: INFO: stderr: ""
Dec 26 10:55:54.233: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 10:55:54.233: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 26 10:55:54.233: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 10:55:54.257: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 26 10:56:04.286: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 10:56:04.286: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 10:56:04.286: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 10:56:04.376: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 26 10:56:04.376: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }]
Dec 26 10:56:04.376: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }]
Dec 26 10:56:04.376: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }]
Dec 26 10:56:04.377: INFO:
Dec 26 10:56:04.377: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 10:56:05.393: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 26 10:56:05.393: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }]
Dec 26 10:56:05.394: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }]
Dec 26 10:56:05.394: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }]
Dec 26 10:56:05.394: INFO:
Dec 26 10:56:05.394: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 10:56:06.589: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 26 10:56:06.589: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }]
Dec 26 10:56:06.589: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }]
Dec 26 10:56:06.589: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC
2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:06.589: INFO: Dec 26 10:56:06.589: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 26 10:56:07.614: INFO: POD NODE PHASE GRACE CONDITIONS Dec 26 10:56:07.615: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }] Dec 26 10:56:07.615: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:07.615: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:07.615: INFO: Dec 26 10:56:07.615: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 26 10:56:09.052: INFO: POD NODE 
PHASE GRACE CONDITIONS Dec 26 10:56:09.052: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }] Dec 26 10:56:09.052: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:09.052: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:09.052: INFO: Dec 26 10:56:09.052: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 26 10:56:10.076: INFO: POD NODE PHASE GRACE CONDITIONS Dec 26 10:56:10.076: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }] Dec 26 10:56:10.076: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:10.076: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:10.076: INFO: Dec 26 10:56:10.076: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 26 10:56:11.424: INFO: POD NODE PHASE GRACE CONDITIONS Dec 26 10:56:11.424: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }] Dec 26 10:56:11.425: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:11.425: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:11.425: INFO: Dec 26 10:56:11.425: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 26 10:56:12.451: INFO: POD NODE PHASE GRACE CONDITIONS Dec 26 10:56:12.451: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }] Dec 26 10:56:12.451: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:12.451: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:12.451: INFO: Dec 26 10:56:12.451: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 26 10:56:13.492: INFO: POD NODE PHASE GRACE CONDITIONS Dec 26 10:56:13.492: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:08 +0000 UTC }] Dec 26 10:56:13.492: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 10:55:29 +0000 UTC }] Dec 26 10:56:13.492: INFO: Dec 26 10:56:13.492: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run 
in namespace e2e-tests-statefulset-q2lhs Dec 26 10:56:14.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:56:14.692: INFO: rc: 1 Dec 26 10:56:14.693: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001adb020 exit status 1 true [0xc000550ce8 0xc000550d10 0xc000550e18] [0xc000550ce8 0xc000550d10 0xc000550e18] [0xc000550cf8 0xc000550de0] [0x935700 0x935700] 0xc001c5e480 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 26 10:56:24.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:56:24.849: INFO: rc: 1 Dec 26 10:56:24.850: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870240 exit status 1 true [0xc001fd2028 0xc001fd2040 0xc001fd2058] [0xc001fd2028 0xc001fd2040 0xc001fd2058] [0xc001fd2038 0xc001fd2050] [0x935700 0x935700] 0xc00188a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:56:34.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:56:34.958: INFO: rc: 1 Dec 26
10:56:34.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce1b0 exit status 1 true [0xc001532000 0xc001532018 0xc001532030] [0xc001532000 0xc001532018 0xc001532030] [0xc001532010 0xc001532028] [0x935700 0x935700] 0xc0014aa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:56:44.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:56:45.072: INFO: rc: 1 Dec 26 10:56:45.072: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870390 exit status 1 true [0xc001fd2060 0xc001fd2078 0xc001fd2090] [0xc001fd2060 0xc001fd2078 0xc001fd2090] [0xc001fd2070 0xc001fd2088] [0x935700 0x935700] 0xc00188a720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:56:55.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:56:55.225: INFO: rc: 1 Dec 26 10:56:55.225: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 
0xc0014ce2d0 exit status 1 true [0xc001532038 0xc001532050 0xc001532078] [0xc001532038 0xc001532050 0xc001532078] [0xc001532048 0xc001532070] [0x935700 0x935700] 0xc0014aa840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:57:05.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:57:05.400: INFO: rc: 1 Dec 26 10:57:05.400: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce420 exit status 1 true [0xc001532080 0xc001532098 0xc0015320b0] [0xc001532080 0xc001532098 0xc0015320b0] [0xc001532090 0xc0015320a8] [0x935700 0x935700] 0xc0014aaae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:57:15.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:57:15.555: INFO: rc: 1 Dec 26 10:57:15.555: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce540 exit status 1 true [0xc0015320b8 0xc0015320d0 0xc0015320e8] [0xc0015320b8 0xc0015320d0 0xc0015320e8] [0xc0015320c8 0xc0015320e0] [0x935700 0x935700] 0xc0014aad80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:57:25.555: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:57:25.693: INFO: rc: 1 Dec 26 10:57:25.694: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce660 exit status 1 true [0xc0015320f0 0xc001532108 0xc001532120] [0xc0015320f0 0xc001532108 0xc001532120] [0xc001532100 0xc001532118] [0x935700 0x935700] 0xc0014ab020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:57:35.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:57:35.881: INFO: rc: 1 Dec 26 10:57:35.881: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018704e0 exit status 1 true [0xc001fd2098 0xc001fd20b0 0xc001fd20c8] [0xc001fd2098 0xc001fd20b0 0xc001fd20c8] [0xc001fd20a8 0xc001fd20c0] [0x935700 0x935700] 0xc00188a9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:57:45.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:57:46.023: INFO: rc: 1 Dec 26 10:57:46.024: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870630 exit status 1 true [0xc001fd20d0 0xc001fd20e8 0xc001fd2100] [0xc001fd20d0 0xc001fd20e8 0xc001fd2100] [0xc001fd20e0 0xc001fd20f8] [0x935700 0x935700] 0xc00188ac60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:57:56.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:57:56.155: INFO: rc: 1 Dec 26 10:57:56.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870750 exit status 1 true [0xc001fd2108 0xc001fd2120 0xc001fd2138] [0xc001fd2108 0xc001fd2120 0xc001fd2138] [0xc001fd2118 0xc001fd2130] [0x935700 0x935700] 0xc00188b2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:58:06.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:58:06.296: INFO: rc: 1 Dec 26 10:58:06.296: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001230c30 exit status 1 true [0xc0002521a8 0xc0002524c8 0xc000252540] [0xc0002521a8 0xc0002524c8 
0xc000252540] [0xc000252380 0xc000252538] [0x935700 0x935700] 0xc00193af00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:58:16.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:58:16.497: INFO: rc: 1 Dec 26 10:58:16.498: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870180 exit status 1 true [0xc0000ea0f8 0xc001fd2008 0xc001fd2020] [0xc0000ea0f8 0xc001fd2008 0xc001fd2020] [0xc001fd2000 0xc001fd2018] [0x935700 0x935700] 0xc00188a420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:58:26.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:58:26.675: INFO: rc: 1 Dec 26 10:58:26.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce1e0 exit status 1 true [0xc001532000 0xc001532018 0xc001532030] [0xc001532000 0xc001532018 0xc001532030] [0xc001532010 0xc001532028] [0x935700 0x935700] 0xc0014aa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:58:36.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:58:36.782: INFO: rc: 1 Dec 26 10:58:36.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce330 exit status 1 true [0xc001532038 0xc001532050 0xc001532078] [0xc001532038 0xc001532050 0xc001532078] [0xc001532048 0xc001532070] [0x935700 0x935700] 0xc0014aa840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:58:46.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:58:46.939: INFO: rc: 1 Dec 26 10:58:46.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001230150 exit status 1 true [0xc0002520b0 0xc0002521e0 0xc000252518] [0xc0002520b0 0xc0002521e0 0xc000252518] [0xc0002521a8 0xc0002524c8] [0x935700 0x935700] 0xc00193a840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:58:56.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:58:57.085: INFO: rc: 1 Dec 26 10:58:57.085: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870300 exit status 1 true [0xc001fd2028 0xc001fd2040 0xc001fd2058] [0xc001fd2028 0xc001fd2040 0xc001fd2058] [0xc001fd2038 0xc001fd2050] [0x935700 0x935700] 0xc00188a6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:59:07.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:59:07.236: INFO: rc: 1 Dec 26 10:59:07.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870480 exit status 1 true [0xc001fd2060 0xc001fd2078 0xc001fd2090] [0xc001fd2060 0xc001fd2078 0xc001fd2090] [0xc001fd2070 0xc001fd2088] [0x935700 0x935700] 0xc00188a960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 26 10:59:17.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 26 10:59:17.411: INFO: rc: 1 Dec 26 10:59:17.411: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870600 exit status 1 true [0xc001fd2098 0xc001fd20b0 0xc001fd20c8] [0xc001fd2098 0xc001fd20b0 0xc001fd20c8] [0xc001fd20a8 0xc001fd20c0] [0x935700 0x935700] 0xc00188ac00 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 10:59:27.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:59:27.613: INFO: rc: 1
Dec 26 10:59:27.614: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001870780 exit status 1 true [0xc001fd20d0 0xc001fd20e8 0xc001fd2100] [0xc001fd20d0 0xc001fd20e8 0xc001fd2100] [0xc001fd20e0 0xc001fd20f8] [0x935700 0x935700] 0xc00188b080 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 10:59:37.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:59:37.769: INFO: rc: 1
Dec 26 10:59:37.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018708d0 exit status 1 true [0xc001fd2108 0xc001fd2120 0xc001fd2138] [0xc001fd2108 0xc001fd2120 0xc001fd2138] [0xc001fd2118 0xc001fd2130] [0x935700 0x935700] 0xc00188b500 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 10:59:47.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:59:48.010: INFO: rc: 1
Dec 26 10:59:48.011: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014ce4b0 exit status 1 true [0xc001532080 0xc001532098 0xc0015320b0] [0xc001532080 0xc001532098 0xc0015320b0] [0xc001532090 0xc0015320a8] [0x935700 0x935700] 0xc0014aaae0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 10:59:58.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 10:59:58.157: INFO: rc: 1
Dec 26 10:59:58.157: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0012302d0 exit status 1 true [0xc000252538 0xc0002525f0 0xc000252720] [0xc000252538 0xc0002525f0 0xc000252720] [0xc000252560 0xc0002526a8] [0x935700 0x935700] 0xc00193aea0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:00:08.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:00:08.300: INFO: rc: 1
Dec 26 11:00:08.300: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ada120 exit status 1 true [0xc000550bd0 0xc000550ca0 0xc000550cf0] [0xc000550bd0 0xc000550ca0 0xc000550cf0] [0xc000550c60 0xc000550ce8] [0x935700 0x935700] 0xc001c5e1e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:00:18.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:00:18.462: INFO: rc: 1
Dec 26 11:00:18.463: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018701b0 exit status 1 true [0xc0000ea2a0 0xc001fd2008 0xc001fd2020] [0xc0000ea2a0 0xc001fd2008 0xc001fd2020] [0xc001fd2000 0xc001fd2018] [0x935700 0x935700] 0xc00188a420 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:00:28.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:00:28.806: INFO: rc: 1
Dec 26 11:00:28.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001230180 exit status 1 true [0xc000550bd0 0xc000550ca0 0xc000550cf0] [0xc000550bd0 0xc000550ca0 0xc000550cf0] [0xc000550c60 0xc000550ce8] [0x935700 0x935700] 0xc001c5e1e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:00:38.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:00:38.944: INFO: rc: 1
Dec 26 11:00:38.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001230330 exit status 1 true [0xc000550cf8 0xc000550de0 0xc000550e38] [0xc000550cf8 0xc000550de0 0xc000550e38] [0xc000550d30 0xc000550e20] [0x935700 0x935700] 0xc001c5e4e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:00:48.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:00:49.123: INFO: rc: 1
Dec 26 11:00:49.123: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001230450 exit status 1 true [0xc000550e48 0xc000550ed8 0xc000550f60] [0xc000550e48 0xc000550ed8 0xc000550f60] [0xc000550e78 0xc000550f40] [0x935700 0x935700] 0xc001c5e780 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:00:59.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:00:59.236: INFO: rc: 1
Dec 26 11:00:59.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0012305d0 exit status 1 true [0xc000550f80 0xc000550ff0 0xc000551060] [0xc000550f80 0xc000550ff0 0xc000551060] [0xc000550fe0 0xc000551038] [0x935700 0x935700] 0xc001c5ea20 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:01:09.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:01:09.438: INFO: rc: 1
Dec 26 11:01:09.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0012306f0 exit status 1 true [0xc0005510a0 0xc000551108 0xc000551128] [0xc0005510a0 0xc000551108 0xc000551128] [0xc0005510e8 0xc000551120] [0x935700 0x935700] 0xc001c5ecc0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Dec 26 11:01:19.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q2lhs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 11:01:19.582: INFO: rc: 1
Dec 26 11:01:19.583: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Dec 26 11:01:19.583: INFO: Scaling statefulset ss to 0
Dec 26 11:01:19.602: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 26 11:01:19.607: INFO: Deleting all statefulset in ns e2e-tests-statefulset-q2lhs
Dec 26 11:01:19.628: INFO: Scaling statefulset ss to 0
Dec 26 11:01:19.696: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 11:01:19.707: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:01:19.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-q2lhs" for this suite.
Dec 26 11:01:25.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:01:26.005: INFO: namespace: e2e-tests-statefulset-q2lhs, resource: bindings, ignored listing per whitelist
Dec 26 11:01:26.075: INFO: namespace e2e-tests-statefulset-q2lhs deletion completed in 6.304739655s
• [SLOW TEST:378.389 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:01:26.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 11:01:26.296: INFO: Creating deployment "nginx-deployment"
Dec 26 11:01:26.307: INFO: Waiting for observed generation 1
Dec 26 11:01:28.553: INFO: Waiting for all required pods to come up
Dec 26 11:01:28.952: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 26 11:02:01.947: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 26 11:02:01.960: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 26 11:02:01.987: INFO: Updating deployment nginx-deployment
Dec 26 11:02:01.987: INFO: Waiting for observed generation 2
Dec 26 11:02:04.456: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 26 11:02:04.797: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 26 11:02:05.939: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 26 11:02:06.903: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 26 11:02:06.903: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 26 11:02:08.517: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 26 11:02:09.470: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 26 11:02:09.471: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 26 11:02:09.959: INFO: Updating deployment nginx-deployment
Dec 26 11:02:09.959: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 26 11:02:11.009: INFO: Verifying that first
rollout's replicaset has .spec.replicas = 20 Dec 26 11:02:14.726: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 26 11:02:16.926: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c5mv/deployments/nginx-deployment,UID:0fae6ac7-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114602,Generation:3,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-26 11:02:06 +0000 UTC 2019-12-26 11:01:26 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-26 11:02:11 +0000 UTC 2019-12-26 11:02:11 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 26 11:02:18.348: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c5mv/replicasets/nginx-deployment-5c98f8fb5,UID:24f75684-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114617,Generation:3,CreationTimestamp:2019-12-26 11:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0fae6ac7-27cf-11ea-a994-fa163e34d433 0xc001e76277 0xc001e76278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 26 11:02:18.349: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 26 11:02:18.350: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c5mv/replicasets/nginx-deployment-85ddf47c5d,UID:0fb39d4f-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114596,Generation:3,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0fae6ac7-27cf-11ea-a994-fa163e34d433 0xc001e76517 0xc001e76518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 26 11:02:19.427: INFO: Pod "nginx-deployment-5c98f8fb5-7mzj2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7mzj2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-7mzj2,UID:26694de5-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114540,Generation:0,CreationTimestamp:2019-12-26 11:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77297 0xc001e77298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77300} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001e77320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.428: INFO: Pod "nginx-deployment-5c98f8fb5-bd65j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bd65j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-bd65j,UID:2afb8855-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114594,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e773e7 0xc001e773e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77450} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e77470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.428: INFO: Pod "nginx-deployment-5c98f8fb5-cdsmq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cdsmq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-cdsmq,UID:2514bbbe-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114537,Generation:0,CreationTimestamp:2019-12-26 
11:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e774e7 0xc001e774e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77550} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e77570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2019-12-26 11:02:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.428: INFO: Pod "nginx-deployment-5c98f8fb5-clrl7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-clrl7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-clrl7,UID:2b215306-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114601,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77637 0xc001e77638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e776a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e776c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.429: INFO: Pod "nginx-deployment-5c98f8fb5-df2x7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-df2x7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-df2x7,UID:2b20eea0-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114599,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77737 0xc001e77738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e777a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e777c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.429: INFO: Pod "nginx-deployment-5c98f8fb5-fz2cc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fz2cc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-fz2cc,UID:2b215239-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114604,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77837 0xc001e77838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e778a0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001e778c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.430: INFO: Pod "nginx-deployment-5c98f8fb5-h4cl6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h4cl6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-h4cl6,UID:2a5a9458-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114577,Generation:0,CreationTimestamp:2019-12-26 11:02:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77937 0xc001e77938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e779a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e779c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.430: INFO: Pod "nginx-deployment-5c98f8fb5-hcnp4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hcnp4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-hcnp4,UID:2afb8d3c-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114593,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77a47 0xc001e77a48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e77ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.431: INFO: Pod "nginx-deployment-5c98f8fb5-mj7v6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mj7v6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-mj7v6,UID:2b5bfc8b-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114605,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77b47 0xc001e77b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77bb0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001e77bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.431: INFO: Pod "nginx-deployment-5c98f8fb5-p9bhc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p9bhc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-p9bhc,UID:2b20e77f-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114600,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77c77 0xc001e77c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e77d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.432: INFO: Pod "nginx-deployment-5c98f8fb5-sv87r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sv87r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-sv87r,UID:26a0363c-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114548,Generation:0,CreationTimestamp:2019-12-26 11:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77d87 0xc001e77d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e77e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:05 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:06 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.432: INFO: Pod "nginx-deployment-5c98f8fb5-twb45" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-twb45,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-twb45,UID:2516324f-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114529,Generation:0,CreationTimestamp:2019-12-26 11:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001e77f27 0xc001e77f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e77f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e77fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.433: INFO: Pod "nginx-deployment-5c98f8fb5-vkj7v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vkj7v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-5c98f8fb5-vkj7v,UID:25104066-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114519,Generation:0,CreationTimestamp:2019-12-26 11:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 24f75684-27cf-11ea-a994-fa163e34d433 0xc001cf6297 0xc001cf6298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf6360} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001cf6380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.433: INFO: Pod "nginx-deployment-85ddf47c5d-2b6gx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2b6gx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-2b6gx,UID:0fef88dc-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114472,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf6587 0xc001cf6588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf6610} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf6630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-26 11:01:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://f7a2c3352a847564d3e55e825231b3ce3df7d4092c724e6fe134122c4d5d1371}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.434: INFO: Pod "nginx-deployment-85ddf47c5d-2hqw9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2hqw9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-2hqw9,UID:2af38e49-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114585,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf66f7 0xc001cf66f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf6760} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf68b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.434: INFO: Pod "nginx-deployment-85ddf47c5d-7grqh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7grqh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-7grqh,UID:2a581af4-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114575,Generation:0,CreationTimestamp:2019-12-26 11:02:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf6af7 0xc001cf6af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf6b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf6b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.435: INFO: Pod "nginx-deployment-85ddf47c5d-9n9xh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9n9xh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-9n9xh,UID:0fcf94d1-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114475,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf6bf7 0xc001cf6bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001cf6c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf6c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-26 11:01:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b5bdce3388b5d54e7323ea557ebb8b2de2910fc476245aff8c6233afca21291e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.435: INFO: Pod "nginx-deployment-85ddf47c5d-b5btk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5btk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-b5btk,UID:0fcf3531-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114460,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7407 0xc001cf7408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf7470} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-26 11:01:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://053422b021b19a175010f37e7c33249f1bbb016728187aa96cf7b0280ff6a5f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.436: INFO: Pod "nginx-deployment-85ddf47c5d-cls7k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cls7k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-cls7k,UID:0fc305d0-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114423,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf75f7 0xc001cf75f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf7750} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-26 11:01:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://046e4c6e4b6674af28c71dfe4e937c1b4e39729e86939b62f02c6d1e218d7023}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.436: INFO: Pod "nginx-deployment-85ddf47c5d-fws55" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fws55,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-fws55,UID:2af34892-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114580,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7857 0xc001cf7858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001cf78e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.437: INFO: Pod "nginx-deployment-85ddf47c5d-fxf5v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fxf5v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-fxf5v,UID:2af3880d-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114584,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7977 0xc001cf7978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf79e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.437: INFO: Pod "nginx-deployment-85ddf47c5d-g9jq2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g9jq2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-g9jq2,UID:2af3e2eb-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114582,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7a77 0xc001cf7a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf7ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.438: INFO: Pod "nginx-deployment-85ddf47c5d-gsm6j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gsm6j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-gsm6j,UID:2a58ccb7-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114576,Generation:0,CreationTimestamp:2019-12-26 11:02:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7b87 0xc001cf7b88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001cf7bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.438: INFO: Pod "nginx-deployment-85ddf47c5d-hdb9s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hdb9s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-hdb9s,UID:0fd05d6b-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114454,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7c87 0xc001cf7c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf7cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-26 11:01:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11bd547553a1d6333c31224b66e4078963e068f17879fb0324801fe59c36f34c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.439: INFO: Pod "nginx-deployment-85ddf47c5d-hh62s" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hh62s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-hh62s,UID:29e6e444-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114610,Generation:0,CreationTimestamp:2019-12-26 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7de7 0xc001cf7de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001cf7e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.439: INFO: Pod "nginx-deployment-85ddf47c5d-km5rv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-km5rv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-km5rv,UID:29e6fd1f-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114620,Generation:0,CreationTimestamp:2019-12-26 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc001cf7f37 0xc001cf7f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cf7fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cf7fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 
11:02:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.440: INFO: Pod "nginx-deployment-85ddf47c5d-lp4gf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lp4gf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-lp4gf,UID:29df842b-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114595,Generation:0,CreationTimestamp:2019-12-26 11:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd80b7 0xc000bd80b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bd8270} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd8290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 11:02:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.440: INFO: Pod "nginx-deployment-85ddf47c5d-m228c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m228c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-m228c,UID:2af6dbd9-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114586,Generation:0,CreationTimestamp:2019-12-26 11:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd84e7 0xc000bd84e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000bd8790} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd8890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.441: INFO: Pod "nginx-deployment-85ddf47c5d-nm5ft" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nm5ft,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-nm5ft,UID:0fc076c6-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114466,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd8a97 0xc000bd8a98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bd8b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd8bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-26 11:01:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bedf78f0a29aed9e3f4285d950678b2032b44f2343de1ce3a3b666a3304616e2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.441: INFO: Pod "nginx-deployment-85ddf47c5d-pxrck" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pxrck,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-pxrck,UID:0fcf8640-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114457,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd8f97 0xc000bd8f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000bd9090} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd90b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-26 11:01:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://667d831ffd3677ca96a44cfbcd49714ba3119bc3bb6b1ea260f90b682ba6ccec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.442: INFO: Pod "nginx-deployment-85ddf47c5d-rx6pr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rx6pr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-rx6pr,UID:0fefa486-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114463,Generation:0,CreationTimestamp:2019-12-26 11:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd92b7 0xc000bd92b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bd93d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd9460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:01:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-26 11:01:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:01:56 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0675a9b1dc5e65a455a3989614c28b879add9f043e76269562d6669ad7e3f6cd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.442: INFO: Pod "nginx-deployment-85ddf47c5d-t56jw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t56jw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-t56jw,UID:2a592a26-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114571,Generation:0,CreationTimestamp:2019-12-26 11:02:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd9837 0xc000bd9838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bd98f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd9960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 11:02:19.443: INFO: Pod "nginx-deployment-85ddf47c5d-tltj6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tltj6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7c5mv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c5mv/pods/nginx-deployment-85ddf47c5d-tltj6,UID:2a58ede9-27cf-11ea-a994-fa163e34d433,ResourceVersion:16114572,Generation:0,CreationTimestamp:2019-12-26 11:02:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0fb39d4f-27cf-11ea-a994-fa163e34d433 0xc000bd9a77 0xc000bd9a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzdmq {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qzdmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qzdmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bd9ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bd9b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:02:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:02:19.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7c5mv" for this suite. 
Dec 26 11:03:45.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:03:45.602: INFO: namespace: e2e-tests-deployment-7c5mv, resource: bindings, ignored listing per whitelist
Dec 26 11:03:45.785: INFO: namespace e2e-tests-deployment-7c5mv deletion completed in 1m24.944439831s
• [SLOW TEST:139.710 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:03:45.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-63a01611-27cf-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 11:03:47.209: INFO: Waiting up to 5m0s for pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-5mfrm" to be "success or failure"
Dec 26 11:03:47.224: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.284837ms
Dec 26 11:03:49.696: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487627085s
Dec 26 11:03:51.710: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500818338s
Dec 26 11:03:53.734: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524977307s
Dec 26 11:03:55.977: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768577733s
Dec 26 11:03:58.219: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.009988758s
Dec 26 11:04:00.911: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.701721972s
Dec 26 11:04:02.945: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.736654735s
STEP: Saw pod success
Dec 26 11:04:02.946: INFO: Pod "pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:04:02.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005 container secret-env-test:
STEP: delete the pod
Dec 26 11:04:03.304: INFO: Waiting for pod pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005 to disappear
Dec 26 11:04:03.321: INFO: Pod pod-secrets-63a195f2-27cf-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:04:03.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5mfrm" for this suite.
Dec 26 11:04:09.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:04:09.501: INFO: namespace: e2e-tests-secrets-5mfrm, resource: bindings, ignored listing per whitelist
Dec 26 11:04:09.501: INFO: namespace e2e-tests-secrets-5mfrm deletion completed in 6.173993697s
• [SLOW TEST:23.714 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:04:09.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 26 11:07:16.176: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:16.241: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:18.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:18.258: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:20.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:20.267: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:22.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:22.284: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:24.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:24.253: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:26.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:26.261: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:28.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:28.259: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:30.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:30.264: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:32.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:32.252: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:34.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:34.258: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:36.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:36.257: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:38.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:38.250: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:40.243: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:40.322: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:42.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:42.257: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 11:07:44.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 11:07:44.254: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:07:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9nnwn" for this suite.
Dec 26 11:08:08.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:08:08.361: INFO: namespace: e2e-tests-container-lifecycle-hook-9nnwn, resource: bindings, ignored listing per whitelist
Dec 26 11:08:08.491: INFO: namespace e2e-tests-container-lifecycle-hook-9nnwn deletion completed in 24.230220174s
• [SLOW TEST:238.990 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:08:08.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ff96e598-27cf-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 11:08:08.844: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-mmz74" to be "success or failure"
Dec 26 11:08:09.032: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 187.884189ms
Dec 26 11:08:11.048: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204191706s
Dec 26 11:08:13.079: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234569513s
Dec 26 11:08:15.097: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253197902s
Dec 26 11:08:17.143: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299110693s
Dec 26 11:08:19.195: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.351326163s
Dec 26 11:08:21.215: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.370631458s
STEP: Saw pod success
Dec 26 11:08:21.215: INFO: Pod "pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:08:21.223: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 26 11:08:21.352: INFO: Waiting for pod pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005 to disappear
Dec 26 11:08:21.394: INFO: Pod pod-projected-configmaps-ff982b80-27cf-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:08:21.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mmz74" for this suite.
Dec 26 11:08:29.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:08:29.483: INFO: namespace: e2e-tests-projected-mmz74, resource: bindings, ignored listing per whitelist
Dec 26 11:08:29.576: INFO: namespace e2e-tests-projected-mmz74 deletion completed in 8.166657154s
• [SLOW TEST:21.083 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:08:29.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0c2beee0-27d0-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 11:08:30.081: INFO: Waiting up to 5m0s for pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-5wmqj" to be "success or failure"
Dec 26 11:08:30.105: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.071649ms
Dec 26 11:08:32.121: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040520093s
Dec 26 11:08:34.152: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07138123s
Dec 26 11:08:36.176: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09521384s
Dec 26 11:08:38.201: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120267659s
Dec 26 11:08:40.219: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138559608s
STEP: Saw pod success
Dec 26 11:08:40.220: INFO: Pod "pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:08:40.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 26 11:08:40.486: INFO: Waiting for pod pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005 to disappear
Dec 26 11:08:40.506: INFO: Pod pod-secrets-0c44090d-27d0-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:08:40.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5wmqj" for this suite.
Dec 26 11:08:46.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:08:46.816: INFO: namespace: e2e-tests-secrets-5wmqj, resource: bindings, ignored listing per whitelist Dec 26 11:08:46.827: INFO: namespace e2e-tests-secrets-5wmqj deletion completed in 6.266933923s STEP: Destroying namespace "e2e-tests-secret-namespace-n7b8d" for this suite. Dec 26 11:08:52.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:08:53.004: INFO: namespace: e2e-tests-secret-namespace-n7b8d, resource: bindings, ignored listing per whitelist Dec 26 11:08:53.127: INFO: namespace e2e-tests-secret-namespace-n7b8d deletion completed in 6.300061399s • [SLOW TEST:23.551 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:08:53.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 26 11:08:53.390: INFO: Number of nodes with available pods: 0
Dec 26 11:08:53.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:08:54.414: INFO: Number of nodes with available pods: 0
Dec 26 11:08:54.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:08:55.429: INFO: Number of nodes with available pods: 0
Dec 26 11:08:55.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:08:56.425: INFO: Number of nodes with available pods: 0
Dec 26 11:08:56.426: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:08:57.427: INFO: Number of nodes with available pods: 0
Dec 26 11:08:57.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:08:58.425: INFO: Number of nodes with available pods: 0
Dec 26 11:08:58.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:08:59.977: INFO: Number of nodes with available pods: 0
Dec 26 11:08:59.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:00.526: INFO: Number of nodes with available pods: 0
Dec 26 11:09:00.526: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:01.740: INFO: Number of nodes with available pods: 0
Dec 26 11:09:01.740: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:02.499: INFO: Number of nodes with available pods: 0
Dec 26 11:09:02.500: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:03.488: INFO: Number of nodes with available pods: 0
Dec 26 11:09:03.488: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:04.430: INFO: Number of nodes with available pods: 1
Dec 26 11:09:04.430: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 26 11:09:04.611: INFO: Number of nodes with available pods: 0
Dec 26 11:09:04.611: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:05.649: INFO: Number of nodes with available pods: 0
Dec 26 11:09:05.649: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:06.669: INFO: Number of nodes with available pods: 0
Dec 26 11:09:06.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:07.820: INFO: Number of nodes with available pods: 0
Dec 26 11:09:07.820: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:08.639: INFO: Number of nodes with available pods: 0
Dec 26 11:09:08.639: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:09.882: INFO: Number of nodes with available pods: 0
Dec 26 11:09:09.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:10.633: INFO: Number of nodes with available pods: 0
Dec 26 11:09:10.633: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:11.636: INFO: Number of nodes with available pods: 0
Dec 26 11:09:11.636: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:12.631: INFO: Number of nodes with available pods: 0
Dec 26 11:09:12.631: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:13.642: INFO: Number of nodes with available pods: 0
Dec 26 11:09:13.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:14.814: INFO: Number of nodes with available pods: 0
Dec 26 11:09:14.814: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:15.631: INFO: Number of nodes with available pods: 0
Dec 26 11:09:15.631: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:16.701: INFO: Number of nodes with available pods: 0
Dec 26 11:09:16.701: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:17.637: INFO: Number of nodes with available pods: 0
Dec 26 11:09:17.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:18.706: INFO: Number of nodes with available pods: 0
Dec 26 11:09:18.706: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:19.648: INFO: Number of nodes with available pods: 0
Dec 26 11:09:19.648: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:20.641: INFO: Number of nodes with available pods: 0
Dec 26 11:09:20.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:21.651: INFO: Number of nodes with available pods: 0
Dec 26 11:09:21.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:22.734: INFO: Number of nodes with available pods: 0
Dec 26 11:09:22.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:24.389: INFO: Number of nodes with available pods: 0
Dec 26 11:09:24.389: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:24.641: INFO: Number of nodes with available pods: 0
Dec 26 11:09:24.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:25.757: INFO: Number of nodes with available pods: 0
Dec 26 11:09:25.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:26.645: INFO: Number of nodes with available pods: 0
Dec 26 11:09:26.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:27.641: INFO: Number of nodes with available pods: 0
Dec 26 11:09:27.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:29.300: INFO: Number of nodes with available pods: 0
Dec 26 11:09:29.301: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:29.749: INFO: Number of nodes with available pods: 0
Dec 26 11:09:29.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:30.744: INFO: Number of nodes with available pods: 0
Dec 26 11:09:30.744: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:31.638: INFO: Number of nodes with available pods: 0
Dec 26 11:09:31.638: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:32.651: INFO: Number of nodes with available pods: 0
Dec 26 11:09:32.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:09:33.651: INFO: Number of nodes with available pods: 1
Dec 26 11:09:33.651: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p7ld4, will wait for the garbage collector to delete the pods
Dec 26 11:09:33.746: INFO: Deleting DaemonSet.extensions daemon-set took: 28.899638ms
Dec 26 11:09:33.846: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.765843ms
Dec 26 11:09:42.656: INFO: Number of nodes with available pods: 0
Dec 26 11:09:42.656: INFO: Number of running nodes: 0, number of available pods: 0
Dec 26 11:09:42.661: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p7ld4/daemonsets","resourceVersion":"16115525"},"items":null}
Dec 26 11:09:42.663: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p7ld4/pods","resourceVersion":"16115525"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:09:42.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p7ld4" for this suite.
Dec 26 11:09:49.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:09:49.384: INFO: namespace: e2e-tests-daemonsets-p7ld4, resource: bindings, ignored listing per whitelist
Dec 26 11:09:49.491: INFO: namespace e2e-tests-daemonsets-p7ld4 deletion completed in 6.818824981s
• [SLOW TEST:56.364 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:09:49.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 26 11:09:58.057: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:10:23.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-lgmz8" for this suite.
Dec 26 11:10:29.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:10:29.678: INFO: namespace: e2e-tests-namespaces-lgmz8, resource: bindings, ignored listing per whitelist
Dec 26 11:10:29.764: INFO: namespace e2e-tests-namespaces-lgmz8 deletion completed in 6.467588355s
STEP: Destroying namespace "e2e-tests-nsdeletetest-ph7zb" for this suite.
Dec 26 11:10:29.770: INFO: Namespace e2e-tests-nsdeletetest-ph7zb was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-t4fql" for this suite.
Dec 26 11:10:35.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:10:35.911: INFO: namespace: e2e-tests-nsdeletetest-t4fql, resource: bindings, ignored listing per whitelist
Dec 26 11:10:36.046: INFO: namespace e2e-tests-nsdeletetest-t4fql deletion completed in 6.27626975s
• [SLOW TEST:46.554 seconds]
[sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:10:36.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 11:10:36.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:10:47.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fxdkn" for this suite.
Dec 26 11:11:35.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:11:35.317: INFO: namespace: e2e-tests-pods-fxdkn, resource: bindings, ignored listing per whitelist
Dec 26 11:11:35.363: INFO: namespace e2e-tests-pods-fxdkn deletion completed in 48.209187217s
• [SLOW TEST:59.317 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:11:35.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 26 11:11:35.609: INFO: PodSpec: initContainers in spec.initContainers
Dec 26 11:12:48.756: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7adb455e-27d0-11ea-948a-0242ac110005", GenerateName:"", 
Namespace:"e2e-tests-init-container-c28nb", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-c28nb/pods/pod-init-7adb455e-27d0-11ea-948a-0242ac110005", UID:"7adf061e-27d0-11ea-a994-fa163e34d433", ResourceVersion:"16115872", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712955495, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"609303670"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rw8nf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0022ef4c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rw8nf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rw8nf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rw8nf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dabaa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c5e120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001dabc00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dabc20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001dabc28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001dabc2c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712955496, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712955496, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712955496, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712955495, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000ee4340), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000220a80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000220af0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5f3f828e9a84d415b11aebe39f6bfcfe4ddab736efe4398204ecbe0d40b84b66"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ee4380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ee4360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:12:48.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-c28nb" for this suite.
Dec 26 11:13:12.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:13:13.028: INFO: namespace: e2e-tests-init-container-c28nb, resource: bindings, ignored listing per whitelist
Dec 26 11:13:13.165: INFO: namespace e2e-tests-init-container-c28nb deletion completed in 24.31766827s
• [SLOW TEST:97.802 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:13:13.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-b51f04c3-27d0-11ea-948a-0242ac110005
STEP: Creating secret with name s-test-opt-upd-b51f06c9-27d0-11ea-948a-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b51f04c3-27d0-11ea-948a-0242ac110005
STEP: Updating secret s-test-opt-upd-b51f06c9-27d0-11ea-948a-0242ac110005
STEP: Creating secret with name s-test-opt-create-b51f0705-27d0-11ea-948a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:14:44.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vkqx2" for this suite.
Dec 26 11:15:08.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:15:08.325: INFO: namespace: e2e-tests-projected-vkqx2, resource: bindings, ignored listing per whitelist
Dec 26 11:15:08.556: INFO: namespace e2e-tests-projected-vkqx2 deletion completed in 24.311667614s
• [SLOW TEST:115.391 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:15:08.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 26 11:15:21.908: INFO: Successfully updated pod "labelsupdatefa155096-27d0-11ea-948a-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:15:24.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xwg67" for this suite.
Dec 26 11:15:48.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:15:48.203: INFO: namespace: e2e-tests-projected-xwg67, resource: bindings, ignored listing per whitelist
Dec 26 11:15:48.364: INFO: namespace e2e-tests-projected-xwg67 deletion completed in 24.239061414s
• [SLOW TEST:39.805 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:15:48.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9x8w4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 26 11:15:48.596: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 26 11:16:20.911: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-9x8w4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 11:16:20.911: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 11:16:21.373: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:16:21.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9x8w4" for this suite.
Dec 26 11:16:47.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:16:47.793: INFO: namespace: e2e-tests-pod-network-test-9x8w4, resource: bindings, ignored listing per whitelist
Dec 26 11:16:47.808: INFO: namespace e2e-tests-pod-network-test-9x8w4 deletion completed in 26.405737917s
• [SLOW TEST:59.444 seconds]
[sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:16:47.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Dec 26 11:16:58.163: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-351a2276-27d1-11ea-948a-0242ac110005,GenerateName:,Namespace:e2e-tests-events-25x8v,SelfLink:/api/v1/namespaces/e2e-tests-events-25x8v/pods/send-events-351a2276-27d1-11ea-948a-0242ac110005,UID:351bacb9-27d1-11ea-a994-fa163e34d433,ResourceVersion:16116312,Generation:0,CreationTimestamp:2019-12-26 11:16:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 77486628,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5jpg5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5jpg5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5jpg5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b33a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b33a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:16:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:16:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:16:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:16:48 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-26 11:16:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-26 11:16:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://9bb3fcbef3dd69b87076c989a5812de044ee49424882df02960b17f641feb53c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Dec 26 11:17:00.177: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Dec 26 11:17:02.194: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:17:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-25x8v" for this suite. Dec 26 11:17:44.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:17:44.318: INFO: namespace: e2e-tests-events-25x8v, resource: bindings, ignored listing per whitelist Dec 26 11:17:44.477: INFO: namespace e2e-tests-events-25x8v deletion completed in 42.250827012s • [SLOW TEST:56.669 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:17:44.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API 
volume plugin Dec 26 11:17:44.765: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-j4f95" to be "success or failure" Dec 26 11:17:44.772: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.417796ms Dec 26 11:17:46.810: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04479465s Dec 26 11:17:48.838: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073212411s Dec 26 11:17:51.331: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566200037s Dec 26 11:17:53.387: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622136014s Dec 26 11:17:55.400: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.634573891s STEP: Saw pod success Dec 26 11:17:55.400: INFO: Pod "downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:17:55.405: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005 container client-container: STEP: delete the pod Dec 26 11:17:56.244: INFO: Waiting for pod downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:17:56.511: INFO: Pod downwardapi-volume-56e1debb-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:17:56.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j4f95" for this suite. 
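For readers reproducing the "container's cpu limit" check above outside the e2e framework, a manifest of roughly the shape the test creates might look like the following. This is an illustrative sketch; the names, image, and cpu value are assumptions, not taken from the log.

```yaml
# Hypothetical pod exposing its own cpu limit via a downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                   # the value the volume file should contain
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The test then waits for the pod to reach `Succeeded` and reads the container log, as the "success or failure" polling above shows.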
Dec 26 11:18:02.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:18:02.650: INFO: namespace: e2e-tests-projected-j4f95, resource: bindings, ignored listing per whitelist Dec 26 11:18:02.835: INFO: namespace e2e-tests-projected-j4f95 deletion completed in 6.295371866s • [SLOW TEST:18.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:18:02.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:18:03.443: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"61e84538-27d1-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001a8b4da), BlockOwnerDeletion:(*bool)(0xc001a8b4db)}} Dec 26 11:18:03.491: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"61d0949c-27d1-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d724a2), BlockOwnerDeletion:(*bool)(0xc001d724a3)}} Dec 26 
11:18:03.710: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"61d3a7ca-27d1-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001a98d92), BlockOwnerDeletion:(*bool)(0xc001a98d93)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:18:08.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vhvw6" for this suite. Dec 26 11:18:16.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:18:17.074: INFO: namespace: e2e-tests-gc-vhvw6, resource: bindings, ignored listing per whitelist Dec 26 11:18:17.087: INFO: namespace e2e-tests-gc-vhvw6 deletion completed in 8.320412086s • [SLOW TEST:14.251 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:18:17.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 
secret with name secret-test-6a4efeae-27d1-11ea-948a-0242ac110005 STEP: Creating a pod to test consume secrets Dec 26 11:18:17.378: INFO: Waiting up to 5m0s for pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-v6f2p" to be "success or failure" Dec 26 11:18:17.484: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 106.097645ms Dec 26 11:18:19.504: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12612675s Dec 26 11:18:21.514: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135742896s Dec 26 11:18:23.538: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160211713s Dec 26 11:18:25.562: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183648816s Dec 26 11:18:27.576: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.198178888s STEP: Saw pod success Dec 26 11:18:27.576: INFO: Pod "pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:18:27.580: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 26 11:18:28.130: INFO: Waiting for pod pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:18:28.373: INFO: Pod pod-secrets-6a5034ee-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:18:28.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-v6f2p" for this suite. 
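The secret-volume consumption test above can be approximated with a Secret plus a pod that mounts it. All names, the key, and the image below are illustrative assumptions, not values from the log.

```yaml
# Hypothetical secret; "dmFsdWUtMQ==" is base64 for "value-1".
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example
data:
  data-1: dmFsdWUtMQ==
---
# Pod that mounts the secret as a read-only volume and prints the key's value.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
```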
Dec 26 11:18:34.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:18:34.785: INFO: namespace: e2e-tests-secrets-v6f2p, resource: bindings, ignored listing per whitelist Dec 26 11:18:34.836: INFO: namespace e2e-tests-secrets-v6f2p deletion completed in 6.446200378s • [SLOW TEST:17.749 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:18:34.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Dec 26 11:18:35.007: INFO: Waiting up to 5m0s for pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-containers-s4wh9" to be "success or failure" Dec 26 11:18:35.036: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.089675ms Dec 26 11:18:37.107: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10001142s Dec 26 11:18:39.135: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127839495s Dec 26 11:18:41.374: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366391669s Dec 26 11:18:43.387: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379842891s Dec 26 11:18:45.409: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.401375456s STEP: Saw pod success Dec 26 11:18:45.409: INFO: Pod "client-containers-74d46b88-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:18:45.413: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-74d46b88-27d1-11ea-948a-0242ac110005 container test-container: STEP: delete the pod Dec 26 11:18:46.221: INFO: Waiting for pod client-containers-74d46b88-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:18:46.498: INFO: Pod client-containers-74d46b88-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:18:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-s4wh9" for this suite. 
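The "override the image's default command (docker entrypoint)" behavior exercised above maps to the pod `command` field. A minimal sketch, with illustrative names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative image
    # `command` replaces the image's ENTRYPOINT; `args` would replace its CMD.
    command: ["/bin/echo", "entrypoint overridden"]
```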
Dec 26 11:18:52.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:18:52.865: INFO: namespace: e2e-tests-containers-s4wh9, resource: bindings, ignored listing per whitelist Dec 26 11:18:52.932: INFO: namespace e2e-tests-containers-s4wh9 deletion completed in 6.354349768s • [SLOW TEST:18.095 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:18:52.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 26 11:18:53.171: INFO: Waiting up to 5m0s for pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-ttfh6" to be "success or failure" Dec 26 11:18:53.179: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.306908ms Dec 26 11:18:55.397: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.226002388s Dec 26 11:18:57.422: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250497244s Dec 26 11:18:59.623: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451880706s Dec 26 11:19:01.637: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.465880047s Dec 26 11:19:03.653: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.482069569s STEP: Saw pod success Dec 26 11:19:03.654: INFO: Pod "pod-7fa84e90-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:19:03.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7fa84e90-27d1-11ea-948a-0242ac110005 container test-container: STEP: delete the pod Dec 26 11:19:03.832: INFO: Waiting for pod pod-7fa84e90-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:19:03.839: INFO: Pod pod-7fa84e90-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:19:03.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ttfh6" for this suite. 
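The emptyDir variants in this suite, including the `(non-root,0644,tmpfs)` case above and the `(root,0777,default)` case later, differ only in the user the pod runs as, the file mode the test container writes, and the volume medium. A hedged sketch of the tmpfs variant (names, image, and uid are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # "non-root" variant; omit for the root variants
  containers:
  - name: test-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed; omit `medium` for the default (disk) variant
```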
Dec 26 11:19:09.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:19:10.042: INFO: namespace: e2e-tests-emptydir-ttfh6, resource: bindings, ignored listing per whitelist Dec 26 11:19:10.080: INFO: namespace e2e-tests-emptydir-ttfh6 deletion completed in 6.234768027s • [SLOW TEST:17.148 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:19:10.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-89e3af28-27d1-11ea-948a-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-89e3aeb1-27d1-11ea-948a-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 26 11:19:10.429: INFO: Waiting up to 5m0s for pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-sp97k" to be "success or 
failure" Dec 26 11:19:10.454: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.644759ms Dec 26 11:19:12.483: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053286791s Dec 26 11:19:14.501: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071363059s Dec 26 11:19:16.544: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114636902s Dec 26 11:19:18.562: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132124938s Dec 26 11:19:20.929: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.499228666s Dec 26 11:19:22.945: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.515717056s STEP: Saw pod success Dec 26 11:19:22.946: INFO: Pod "projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:19:22.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005 container projected-all-volume-test: STEP: delete the pod Dec 26 11:19:23.148: INFO: Waiting for pod projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:19:23.188: INFO: Pod projected-volume-89e3ae14-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:19:23.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sp97k" for this suite. 
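The "Projected combined" test above creates a ConfigMap, a Secret, and a pod whose single projected volume draws from both plus the downward API, then reads all three files. A manifest of that shape might look like this; every name and key below is illustrative, not taken from the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:                       # one volume, three source types
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-example
          items:
          - key: configmap-data
            path: cm-data
      - secret:
          name: secret-projected-all-example
          items:
          - key: secret-data
            path: secret-data
```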
Dec 26 11:19:31.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:19:31.331: INFO: namespace: e2e-tests-projected-sp97k, resource: bindings, ignored listing per whitelist Dec 26 11:19:31.398: INFO: namespace e2e-tests-projected-sp97k deletion completed in 8.191933825s • [SLOW TEST:21.318 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:19:31.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 26 11:19:31.570: INFO: Waiting up to 5m0s for pod "pod-968b21fe-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-p9m6s" to be "success or failure" Dec 26 11:19:31.604: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.655969ms Dec 26 11:19:33.848: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.277639596s Dec 26 11:19:35.863: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29304256s Dec 26 11:19:38.072: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501412673s Dec 26 11:19:40.097: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.526818293s Dec 26 11:19:42.195: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.625181381s STEP: Saw pod success Dec 26 11:19:42.196: INFO: Pod "pod-968b21fe-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:19:42.204: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-968b21fe-27d1-11ea-948a-0242ac110005 container test-container: STEP: delete the pod Dec 26 11:19:42.520: INFO: Waiting for pod pod-968b21fe-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:19:42.542: INFO: Pod pod-968b21fe-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:19:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p9m6s" for this suite. 
Dec 26 11:19:48.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:19:48.774: INFO: namespace: e2e-tests-emptydir-p9m6s, resource: bindings, ignored listing per whitelist Dec 26 11:19:48.779: INFO: namespace e2e-tests-emptydir-p9m6s deletion completed in 6.214586018s • [SLOW TEST:17.381 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:19:48.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 26 11:19:48.977: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-l9f2v" to be "success or failure" Dec 26 11:19:49.017: 
INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.935291ms Dec 26 11:19:51.037: INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059445914s Dec 26 11:19:53.068: INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090369455s Dec 26 11:19:55.084: INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106543059s Dec 26 11:19:57.096: INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118508252s Dec 26 11:19:59.109: INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131524481s STEP: Saw pod success Dec 26 11:19:59.109: INFO: Pod "downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:19:59.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005 container client-container: STEP: delete the pod Dec 26 11:19:59.404: INFO: Waiting for pod downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005 to disappear Dec 26 11:19:59.863: INFO: Pod downwardapi-volume-a0e9cc29-27d1-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:19:59.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l9f2v" for this suite. 
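The "node allocatable (memory) as default memory limit" check above relies on downward API fallback: when a container declares no memory limit, `limits.memory` resolves to the node's allocatable memory rather than failing. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memlimit-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                            # illustrative image
    # No resources.limits.memory set: the downward API file below
    # falls back to the node's allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```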
Dec 26 11:20:06.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:20:06.369: INFO: namespace: e2e-tests-downward-api-l9f2v, resource: bindings, ignored listing per whitelist
Dec 26 11:20:06.410: INFO: namespace e2e-tests-downward-api-l9f2v deletion completed in 6.511799419s

• [SLOW TEST:17.631 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:20:06.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2qbfl
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-2qbfl
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-2qbfl
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-2qbfl
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-2qbfl
Dec 26 11:20:18.952: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2qbfl, name: ss-0, uid: b039a546-27d1-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 26 11:20:22.502: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2qbfl, name: ss-0, uid: b039a546-27d1-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 26 11:20:22.698: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2qbfl, name: ss-0, uid: b039a546-27d1-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 26 11:20:22.723: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-2qbfl
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-2qbfl
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-2qbfl and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 26 11:20:33.991: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2qbfl
Dec 26 11:20:34.000: INFO: Scaling statefulset ss to 0
Dec 26 11:20:44.093: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 11:20:44.096: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:20:44.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2qbfl" for this suite.
Dec 26 11:20:52.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:20:52.312: INFO: namespace: e2e-tests-statefulset-2qbfl, resource: bindings, ignored listing per whitelist
Dec 26 11:20:52.391: INFO: namespace e2e-tests-statefulset-2qbfl deletion completed in 8.256087934s

• [SLOW TEST:45.980 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:20:52.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-c6ee82dd-27d1-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 11:20:52.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-jrtms" to be "success or failure"
Dec 26 11:20:52.799: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.324543ms
Dec 26 11:20:55.012: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220651374s
Dec 26 11:20:57.030: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239039767s
Dec 26 11:20:59.361: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569904189s
Dec 26 11:21:01.380: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588800854s
Dec 26 11:21:03.392: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601206675s
STEP: Saw pod success
Dec 26 11:21:03.393: INFO: Pod "pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:21:03.396: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 26 11:21:03.859: INFO: Waiting for pod pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005 to disappear
Dec 26 11:21:04.244: INFO: Pod pod-projected-configmaps-c6f262a1-27d1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:21:04.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jrtms" for this suite.
Dec 26 11:21:10.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:21:10.663: INFO: namespace: e2e-tests-projected-jrtms, resource: bindings, ignored listing per whitelist
Dec 26 11:21:10.727: INFO: namespace e2e-tests-projected-jrtms deletion completed in 6.458071423s

• [SLOW TEST:18.336 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:21:10.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 26 11:21:20.380: INFO: 10 pods remaining
Dec 26 11:21:20.380: INFO: 0 pods has nil DeletionTimestamp
Dec 26 11:21:20.380: INFO:
Dec 26 11:21:21.670: INFO: 0 pods remaining
Dec 26 11:21:21.670: INFO: 0 pods has nil DeletionTimestamp
Dec 26 11:21:21.670: INFO:
Dec 26 11:21:23.032: INFO: 0 pods remaining
Dec 26 11:21:23.033: INFO: 0 pods has nil DeletionTimestamp
Dec 26 11:21:23.033: INFO:
STEP: Gathering metrics
W1226 11:21:23.689968 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 11:21:23.690: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:21:23.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9gfpb" for this suite.
Dec 26 11:21:35.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:21:36.020: INFO: namespace: e2e-tests-gc-9gfpb, resource: bindings, ignored listing per whitelist
Dec 26 11:21:36.031: INFO: namespace e2e-tests-gc-9gfpb deletion completed in 12.270848595s

• [SLOW TEST:25.303 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:21:36.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e0e0a902-27d1-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 11:21:36.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-b9j88" to be "success or failure"
Dec 26 11:21:36.404: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.430334ms
Dec 26 11:21:38.427: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110178555s
Dec 26 11:21:40.627: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309676871s
Dec 26 11:21:42.646: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329231212s
Dec 26 11:21:44.675: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.357368473s
Dec 26 11:21:46.697: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37974611s
STEP: Saw pod success
Dec 26 11:21:46.697: INFO: Pod "pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:21:46.702: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Dec 26 11:21:47.353: INFO: Waiting for pod pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005 to disappear
Dec 26 11:21:47.712: INFO: Pod pod-projected-secrets-e0e1fec9-27d1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:21:47.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9j88" for this suite.
Dec 26 11:21:53.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:21:54.162: INFO: namespace: e2e-tests-projected-b9j88, resource: bindings, ignored listing per whitelist
Dec 26 11:21:54.319: INFO: namespace e2e-tests-projected-b9j88 deletion completed in 6.580512491s

• [SLOW TEST:18.287 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:21:54.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-dx4st
Dec 26 11:22:02.674: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-dx4st
STEP: checking the pod's current state and verifying that restartCount is present
Dec 26 11:22:02.677: INFO: Initial restart count of pod liveness-http is 0
Dec 26 11:22:17.529: INFO: Restart count of pod e2e-tests-container-probe-dx4st/liveness-http is now 1 (14.851437835s elapsed)
Dec 26 11:22:37.746: INFO: Restart count of pod e2e-tests-container-probe-dx4st/liveness-http is now 2 (35.068470495s elapsed)
Dec 26 11:22:58.086: INFO: Restart count of pod e2e-tests-container-probe-dx4st/liveness-http is now 3 (55.409255941s elapsed)
Dec 26 11:23:18.307: INFO: Restart count of pod e2e-tests-container-probe-dx4st/liveness-http is now 4 (1m15.630178766s elapsed)
Dec 26 11:24:17.140: INFO: Restart count of pod e2e-tests-container-probe-dx4st/liveness-http is now 5 (2m14.462862187s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:24:17.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dx4st" for this suite.
Dec 26 11:24:23.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:24:23.733: INFO: namespace: e2e-tests-container-probe-dx4st, resource: bindings, ignored listing per whitelist
Dec 26 11:24:23.833: INFO: namespace e2e-tests-container-probe-dx4st deletion completed in 6.433834816s

• [SLOW TEST:149.514 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:24:23.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 26 11:24:24.099: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117370,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 26 11:24:24.100: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117370,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 26 11:24:34.119: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117383,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 26 11:24:34.120: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117383,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 26 11:24:44.163: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117396,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 26 11:24:44.164: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117396,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 26 11:24:54.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117408,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 26 11:24:54.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-a,UID:44e77785-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117408,Generation:0,CreationTimestamp:2019-12-26 11:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 26 11:25:04.214: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-b,UID:5ccf3f0a-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117421,Generation:0,CreationTimestamp:2019-12-26 11:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 26 11:25:04.215: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-b,UID:5ccf3f0a-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117421,Generation:0,CreationTimestamp:2019-12-26 11:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 26 11:25:14.274: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-b,UID:5ccf3f0a-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117434,Generation:0,CreationTimestamp:2019-12-26 11:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 26 11:25:14.274: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nct99,SelfLink:/api/v1/namespaces/e2e-tests-watch-nct99/configmaps/e2e-watch-test-configmap-b,UID:5ccf3f0a-27d2-11ea-a994-fa163e34d433,ResourceVersion:16117434,Generation:0,CreationTimestamp:2019-12-26 11:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:25:24.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nct99" for this suite.
Dec 26 11:25:30.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:25:30.873: INFO: namespace: e2e-tests-watch-nct99, resource: bindings, ignored listing per whitelist
Dec 26 11:25:30.895: INFO: namespace e2e-tests-watch-nct99 deletion completed in 6.581033784s

• [SLOW TEST:67.061 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:25:30.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 26 11:25:31.404: INFO: Number of nodes with available pods: 0
Dec 26 11:25:31.404: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:32.451: INFO: Number of nodes with available pods: 0
Dec 26 11:25:32.451: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:33.697: INFO: Number of nodes with available pods: 0
Dec 26 11:25:33.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:34.440: INFO: Number of nodes with available pods: 0
Dec 26 11:25:34.440: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:35.447: INFO: Number of nodes with available pods: 0
Dec 26 11:25:35.447: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:36.668: INFO: Number of nodes with available pods: 0
Dec 26 11:25:36.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:37.461: INFO: Number of nodes with available pods: 0
Dec 26 11:25:37.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:38.741: INFO: Number of nodes with available pods: 0
Dec 26 11:25:38.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:39.454: INFO: Number of nodes with available pods: 0
Dec 26 11:25:39.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:40.442: INFO: Number of nodes with available pods: 0
Dec 26 11:25:40.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:25:41.443: INFO: Number of nodes with available pods: 1
Dec 26 11:25:41.443: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 26 11:25:41.523: INFO: Number of nodes with available pods: 1
Dec 26 11:25:41.523: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2v76c, will wait for the garbage collector to delete the pods
Dec 26 11:25:42.672: INFO: Deleting DaemonSet.extensions daemon-set took: 26.583454ms
Dec 26 11:25:44.273: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.600977351s
Dec 26 11:25:52.695: INFO: Number of nodes with available pods: 0
Dec 26 11:25:52.695: INFO: Number of running nodes: 0, number of available pods: 0
Dec 26 11:25:52.739: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2v76c/daemonsets","resourceVersion":"16117521"},"items":null}
Dec 26 11:25:52.773: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2v76c/pods","resourceVersion":"16117522"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:25:52.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2v76c" for this suite.
Dec 26 11:25:58.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:25:59.072: INFO: namespace: e2e-tests-daemonsets-2v76c, resource: bindings, ignored listing per whitelist Dec 26 11:25:59.283: INFO: namespace e2e-tests-daemonsets-2v76c deletion completed in 6.486594405s • [SLOW TEST:28.388 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:25:59.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 26 11:25:59.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-dltwh" to be "success or failure" Dec 26 11:25:59.557: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.480117ms Dec 26 11:26:01.576: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02644607s Dec 26 11:26:03.608: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058496206s Dec 26 11:26:05.746: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196404482s Dec 26 11:26:07.766: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217255963s Dec 26 11:26:09.787: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.23748199s STEP: Saw pod success Dec 26 11:26:09.787: INFO: Pod "downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:26:09.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005 container client-container: STEP: delete the pod Dec 26 11:26:10.489: INFO: Waiting for pod downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005 to disappear Dec 26 11:26:10.778: INFO: Pod downwardapi-volume-7dbc3478-27d2-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:26:10.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dltwh" for this suite. 
Dec 26 11:26:16.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:26:17.103: INFO: namespace: e2e-tests-projected-dltwh, resource: bindings, ignored listing per whitelist Dec 26 11:26:17.148: INFO: namespace e2e-tests-projected-dltwh deletion completed in 6.363336051s • [SLOW TEST:17.864 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:26:17.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:26:17.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Dec 26 11:26:17.433: INFO: stderr: "" Dec 26 11:26:17.433: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", 
GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Dec 26 11:26:17.441: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:26:17.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xxjkt" for this suite. Dec 26 11:26:23.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:26:23.755: INFO: namespace: e2e-tests-kubectl-xxjkt, resource: bindings, ignored listing per whitelist Dec 26 11:26:23.787: INFO: namespace e2e-tests-kubectl-xxjkt deletion completed in 6.310038817s S [SKIPPING] [6.637 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:26:17.441: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:26:23.788: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-8c6e2e92-27d2-11ea-948a-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 26 11:26:24.144: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-lgkgh" to be "success or failure" Dec 26 11:26:24.159: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.37339ms Dec 26 11:26:26.714: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.569905512s Dec 26 11:26:28.749: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.605161152s Dec 26 11:26:31.308: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.163567723s Dec 26 11:26:33.330: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.185814696s Dec 26 11:26:35.344: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.200386686s STEP: Saw pod success Dec 26 11:26:35.344: INFO: Pod "pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:26:35.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 26 11:26:36.698: INFO: Waiting for pod pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005 to disappear Dec 26 11:26:36.705: INFO: Pod pod-projected-configmaps-8c722c1f-27d2-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:26:36.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lgkgh" for this suite. Dec 26 11:26:42.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:26:42.974: INFO: namespace: e2e-tests-projected-lgkgh, resource: bindings, ignored listing per whitelist Dec 26 11:26:43.075: INFO: namespace e2e-tests-projected-lgkgh deletion completed in 6.350823347s • [SLOW TEST:19.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:26:43.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-lg9cd [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Dec 26 11:26:43.333: INFO: Found 0 stateful pods, waiting for 3 Dec 26 11:26:53.340: INFO: Found 2 stateful pods, waiting for 3 Dec 26 11:27:03.356: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:27:03.356: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:27:03.356: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 26 11:27:13.352: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:27:13.352: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:27:13.352: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 26 11:27:13.399: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is 
greater than the number of replicas STEP: Performing a canary update Dec 26 11:27:23.492: INFO: Updating stateful set ss2 Dec 26 11:27:23.509: INFO: Waiting for Pod e2e-tests-statefulset-lg9cd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Dec 26 11:27:35.696: INFO: Found 2 stateful pods, waiting for 3 Dec 26 11:27:45.718: INFO: Found 2 stateful pods, waiting for 3 Dec 26 11:27:55.734: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:27:55.734: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:27:55.734: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 26 11:28:05.710: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:28:05.710: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 26 11:28:05.710: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 26 11:28:05.753: INFO: Updating stateful set ss2 Dec 26 11:28:05.842: INFO: Waiting for Pod e2e-tests-statefulset-lg9cd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 26 11:28:15.958: INFO: Waiting for Pod e2e-tests-statefulset-lg9cd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 26 11:28:26.013: INFO: Updating stateful set ss2 Dec 26 11:28:26.218: INFO: Waiting for StatefulSet e2e-tests-statefulset-lg9cd/ss2 to complete update Dec 26 11:28:26.218: INFO: Waiting for Pod e2e-tests-statefulset-lg9cd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 26 11:28:36.468: INFO: Waiting for StatefulSet e2e-tests-statefulset-lg9cd/ss2 to complete update Dec 26 11:28:36.469: INFO: Waiting for Pod e2e-tests-statefulset-lg9cd/ss2-0 to have revision 
ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 26 11:28:47.379: INFO: Waiting for StatefulSet e2e-tests-statefulset-lg9cd/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 26 11:28:56.251: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lg9cd Dec 26 11:28:56.262: INFO: Scaling statefulset ss2 to 0 Dec 26 11:29:36.343: INFO: Waiting for statefulset status.replicas updated to 0 Dec 26 11:29:36.352: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:29:36.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-lg9cd" for this suite. Dec 26 11:29:44.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:29:44.564: INFO: namespace: e2e-tests-statefulset-lg9cd, resource: bindings, ignored listing per whitelist Dec 26 11:29:44.586: INFO: namespace e2e-tests-statefulset-lg9cd deletion completed in 8.177720635s • [SLOW TEST:181.511 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:29:44.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Dec 26 11:29:44.802: INFO: Waiting up to 5m0s for pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005" in namespace "e2e-tests-var-expansion-8v4wd" to be "success or failure" Dec 26 11:29:44.812: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102708ms Dec 26 11:29:46.829: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027036293s Dec 26 11:29:48.900: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098692103s Dec 26 11:29:50.917: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115669231s Dec 26 11:29:52.939: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137175394s Dec 26 11:29:54.949: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.147548804s STEP: Saw pod success Dec 26 11:29:54.949: INFO: Pod "var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:29:54.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005 container dapi-container: STEP: delete the pod Dec 26 11:29:55.036: INFO: Waiting for pod var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005 to disappear Dec 26 11:29:55.047: INFO: Pod var-expansion-040e3b0c-27d3-11ea-948a-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:29:55.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8v4wd" for this suite. Dec 26 11:30:01.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:30:01.401: INFO: namespace: e2e-tests-var-expansion-8v4wd, resource: bindings, ignored listing per whitelist Dec 26 11:30:01.439: INFO: namespace e2e-tests-var-expansion-8v4wd deletion completed in 6.386057636s • [SLOW TEST:16.853 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:30:01.439: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-78nk STEP: Creating a pod to test atomic-volume-subpath Dec 26 11:30:01.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-78nk" in namespace "e2e-tests-subpath-6ttr7" to be "success or failure" Dec 26 11:30:01.719: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 25.371685ms Dec 26 11:30:03.987: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293987756s Dec 26 11:30:06.011: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317390462s Dec 26 11:30:08.146: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452577137s Dec 26 11:30:10.159: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.465535112s Dec 26 11:30:12.174: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.480244373s Dec 26 11:30:14.194: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.500637653s Dec 26 11:30:16.224: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.530503898s Dec 26 11:30:18.303: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.609649321s Dec 26 11:30:20.329: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 18.635828126s Dec 26 11:30:22.360: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 20.666401007s Dec 26 11:30:24.372: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 22.678163034s Dec 26 11:30:26.393: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 24.699632441s Dec 26 11:30:28.437: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 26.74367533s Dec 26 11:30:30.468: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 28.774345456s Dec 26 11:30:32.520: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 30.826966081s Dec 26 11:30:34.547: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Running", Reason="", readiness=false. Elapsed: 32.853099484s Dec 26 11:30:36.605: INFO: Pod "pod-subpath-test-downwardapi-78nk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 34.911906598s STEP: Saw pod success Dec 26 11:30:36.606: INFO: Pod "pod-subpath-test-downwardapi-78nk" satisfied condition "success or failure" Dec 26 11:30:36.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-78nk container test-container-subpath-downwardapi-78nk: STEP: delete the pod Dec 26 11:30:37.160: INFO: Waiting for pod pod-subpath-test-downwardapi-78nk to disappear Dec 26 11:30:37.192: INFO: Pod pod-subpath-test-downwardapi-78nk no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-78nk Dec 26 11:30:37.193: INFO: Deleting pod "pod-subpath-test-downwardapi-78nk" in namespace "e2e-tests-subpath-6ttr7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:30:37.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6ttr7" for this suite. Dec 26 11:30:45.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:30:45.578: INFO: namespace: e2e-tests-subpath-6ttr7, resource: bindings, ignored listing per whitelist Dec 26 11:30:45.594: INFO: namespace e2e-tests-subpath-6ttr7 deletion completed in 8.274583811s • [SLOW TEST:44.155 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:30:45.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-2872aea3-27d3-11ea-948a-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-2872af5f-27d3-11ea-948a-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2872aea3-27d3-11ea-948a-0242ac110005 STEP: Updating configmap cm-test-opt-upd-2872af5f-27d3-11ea-948a-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-2872afaf-27d3-11ea-948a-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:32:16.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z84pb" for this suite. 
Dec 26 11:32:42.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:32:42.986: INFO: namespace: e2e-tests-configmap-z84pb, resource: bindings, ignored listing per whitelist Dec 26 11:32:43.184: INFO: namespace e2e-tests-configmap-z84pb deletion completed in 26.475857751s • [SLOW TEST:117.590 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:32:43.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-6e8f5558-27d3-11ea-948a-0242ac110005 STEP: Creating a pod to test consume secrets Dec 26 11:32:43.566: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-cvf2w" to be "success or failure" Dec 26 11:32:43.575: INFO: Pod 
"pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.328146ms Dec 26 11:32:45.670: INFO: Pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104166667s Dec 26 11:32:47.682: INFO: Pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115885147s Dec 26 11:32:50.054: INFO: Pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.487888852s Dec 26 11:32:52.067: INFO: Pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500858378s Dec 26 11:32:54.078: INFO: Pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.512513988s STEP: Saw pod success Dec 26 11:32:54.078: INFO: Pod "pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:32:54.082: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Dec 26 11:32:54.404: INFO: Waiting for pod pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005 to disappear Dec 26 11:32:54.428: INFO: Pod pod-projected-secrets-6e9a12d2-27d3-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:32:54.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cvf2w" for this suite. 
Dec 26 11:33:00.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:33:00.703: INFO: namespace: e2e-tests-projected-cvf2w, resource: bindings, ignored listing per whitelist Dec 26 11:33:00.863: INFO: namespace e2e-tests-projected-cvf2w deletion completed in 6.256276313s • [SLOW TEST:17.678 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:33:00.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 26 11:33:01.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-4vn84" to be "success or failure" Dec 26 11:33:01.096: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005": Phase="Pending", 
Reason="", readiness=false. Elapsed: 20.114505ms Dec 26 11:33:03.105: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0293029s Dec 26 11:33:05.130: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05425449s Dec 26 11:33:07.299: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222943929s Dec 26 11:33:09.328: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252410362s Dec 26 11:33:11.348: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.272047819s STEP: Saw pod success Dec 26 11:33:11.348: INFO: Pod "downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:33:11.361: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005 container client-container: STEP: delete the pod Dec 26 11:33:11.555: INFO: Waiting for pod downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005 to disappear Dec 26 11:33:11.571: INFO: Pod downwardapi-volume-790b67ff-27d3-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:33:11.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4vn84" for this suite. 
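The repeated `Phase="Pending" … Elapsed: …` lines above come from a poll loop that re-reads the pod phase until it reaches a terminal state. A minimal local sketch of that pattern, with a stubbed `get_phase` standing in for the real lookup (roughly `kubectl get pod "$POD" -o jsonpath='{.status.phase}'`); the stub and its Pending-then-Succeeded schedule are invented for illustration:

```shell
#!/bin/sh
# Stub for the real phase lookup; reports Pending three times, then Succeeded.
# (A real check would shell out to kubectl instead of counting calls.)
COUNT=0
get_phase() {
  COUNT=$((COUNT + 1))
  if [ "$COUNT" -lt 4 ]; then
    PHASE=Pending
  else
    PHASE=Succeeded
  fi
}

# Poll until the pod reaches a terminal phase, like the framework's
# 'Waiting up to 5m0s for pod ... to be "success or failure"' loop.
ATTEMPTS=0
while [ "$ATTEMPTS" -lt 150 ]; do
  get_phase
  echo "Phase=$PHASE"
  case "$PHASE" in
    Succeeded|Failed) break ;;
  esac
  ATTEMPTS=$((ATTEMPTS + 1))
  # sleep 2   # the real loop waits a couple of seconds between polls
done
echo "final phase: $PHASE"
```

The attempt cap plays the role of the 5m0s timeout budget; the framework additionally records the elapsed time seen in each log line above.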
Dec 26 11:33:17.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:33:17.805: INFO: namespace: e2e-tests-projected-4vn84, resource: bindings, ignored listing per whitelist Dec 26 11:33:17.811: INFO: namespace e2e-tests-projected-4vn84 deletion completed in 6.232182994s • [SLOW TEST:16.948 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:33:17.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 26 11:33:17.963: INFO: Waiting up to 5m0s for pod "pod-831d3629-27d3-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-xbmmf" to be "success or failure" Dec 26 11:33:17.974: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590527ms Dec 26 11:33:19.997: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0335372s Dec 26 11:33:22.097: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132977005s Dec 26 11:33:24.114: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149984239s Dec 26 11:33:26.128: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164573961s Dec 26 11:33:28.238: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.274422258s STEP: Saw pod success Dec 26 11:33:28.238: INFO: Pod "pod-831d3629-27d3-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:33:28.326: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-831d3629-27d3-11ea-948a-0242ac110005 container test-container: STEP: delete the pod Dec 26 11:33:29.229: INFO: Waiting for pod pod-831d3629-27d3-11ea-948a-0242ac110005 to disappear Dec 26 11:33:29.256: INFO: Pod pod-831d3629-27d3-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:33:29.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xbmmf" for this suite. 
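The `(non-root,0777,default)` case above mounts an emptyDir on the node's default medium and verifies the mode bits of a file the test container creates. The permission check itself reduces to ordinary filesystem operations; a local sketch, where a temp directory merely stands in for the emptyDir mount:

```shell
#!/bin/sh
# Stand-in for the emptyDir mount point (default medium = node-local disk).
DIR=$(mktemp -d)

# What the test container does: create a file and set mode 0777 on it.
touch "$DIR/mount-test"
chmod 0777 "$DIR/mount-test"

# Read the mode back, as the test verifies before reporting success.
PERMS=$(ls -l "$DIR/mount-test" | cut -c2-10)
echo "mode bits: $PERMS"

rm -rf "$DIR"
```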
Dec 26 11:33:35.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:33:35.480: INFO: namespace: e2e-tests-emptydir-xbmmf, resource: bindings, ignored listing per whitelist Dec 26 11:33:35.543: INFO: namespace e2e-tests-emptydir-xbmmf deletion completed in 6.263721109s • [SLOW TEST:17.731 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:33:35.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Dec 26 11:33:35.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-56fwp' Dec 26 11:33:37.941: INFO: stderr: "" Dec 26 11:33:37.941: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
Waiting for Redis master to start. Dec 26 11:33:39.226: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:39.226: INFO: Found 0 / 1 Dec 26 11:33:40.129: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:40.129: INFO: Found 0 / 1 Dec 26 11:33:40.962: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:40.962: INFO: Found 0 / 1 Dec 26 11:33:41.958: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:41.958: INFO: Found 0 / 1 Dec 26 11:33:43.691: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:43.691: INFO: Found 0 / 1 Dec 26 11:33:44.198: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:44.198: INFO: Found 0 / 1 Dec 26 11:33:44.963: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:44.963: INFO: Found 0 / 1 Dec 26 11:33:45.951: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:45.951: INFO: Found 0 / 1 Dec 26 11:33:46.957: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:46.958: INFO: Found 0 / 1 Dec 26 11:33:48.006: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:48.007: INFO: Found 1 / 1 Dec 26 11:33:48.007: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 26 11:33:48.019: INFO: Selector matched 1 pods for map[app:redis] Dec 26 11:33:48.019: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Dec 26 11:33:48.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bwjpq redis-master --namespace=e2e-tests-kubectl-56fwp' Dec 26 11:33:48.196: INFO: stderr: "" Dec 26 11:33:48.196: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Dec 11:33:46.082 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Dec 11:33:46.083 # Server started, Redis version 3.2.12\n1:M 26 Dec 11:33:46.083 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Dec 11:33:46.083 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 26 11:33:48.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bwjpq redis-master --namespace=e2e-tests-kubectl-56fwp --tail=1' Dec 26 11:33:48.450: INFO: stderr: "" Dec 26 11:33:48.450: INFO: stdout: "1:M 26 Dec 11:33:46.083 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 26 11:33:48.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bwjpq redis-master --namespace=e2e-tests-kubectl-56fwp --limit-bytes=1' Dec 26 11:33:48.594: INFO: stderr: "" Dec 26 11:33:48.594: INFO: stdout: " " STEP: exposing timestamps Dec 26 11:33:48.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bwjpq redis-master --namespace=e2e-tests-kubectl-56fwp --tail=1 --timestamps' Dec 26 11:33:48.738: INFO: 
stderr: "" Dec 26 11:33:48.738: INFO: stdout: "2019-12-26T11:33:46.166834894Z 1:M 26 Dec 11:33:46.083 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 26 11:33:51.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bwjpq redis-master --namespace=e2e-tests-kubectl-56fwp --since=1s' Dec 26 11:33:51.507: INFO: stderr: "" Dec 26 11:33:51.507: INFO: stdout: "" Dec 26 11:33:51.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bwjpq redis-master --namespace=e2e-tests-kubectl-56fwp --since=24h' Dec 26 11:33:51.691: INFO: stderr: "" Dec 26 11:33:51.691: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Dec 11:33:46.082 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Dec 11:33:46.083 # Server started, Redis version 3.2.12\n1:M 26 Dec 11:33:46.083 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 26 Dec 11:33:46.083 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Dec 26 11:33:51.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-56fwp' Dec 26 11:33:51.834: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 11:33:51.834: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 26 11:33:51.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-56fwp' Dec 26 11:33:52.041: INFO: stderr: "No resources found.\n" Dec 26 11:33:52.041: INFO: stdout: "" Dec 26 11:33:52.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-56fwp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 26 11:33:52.281: INFO: stderr: "" Dec 26 11:33:52.281: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:33:52.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-56fwp" for this suite. 
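The filtering steps above exercise `kubectl logs` with `--tail=1`, `--limit-bytes=1`, `--since=…`, and `--timestamps` against the redis-master pod. `--tail` and `--limit-bytes` behave like `tail -n` and `head -c` applied to the container's log stream, which is why `--limit-bytes=1` returned only the stream's first character. A local analogy on a stand-in log file (file contents invented):

```shell
#!/bin/sh
# A stand-in for the container's log stream.
LOG=$(mktemp)
printf 'Server started, Redis version 3.2.12\nReady to accept connections\n' > "$LOG"

# --tail=1: keep only the last line of the stream.
LAST=$(tail -n 1 "$LOG")
echo "tail -n 1  -> $LAST"

# --limit-bytes=1: stop after emitting a single byte.
FIRST=$(head -c 1 "$LOG")
echo "head -c 1  -> $FIRST"

rm -f "$LOG"
```

`--since` has no such one-liner analogy: it filters by each entry's timestamp, which is also what `--timestamps` exposes as the `2019-12-26T11:33:46…` prefix seen above.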
Dec 26 11:34:16.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:34:16.407: INFO: namespace: e2e-tests-kubectl-56fwp, resource: bindings, ignored listing per whitelist Dec 26 11:34:16.722: INFO: namespace e2e-tests-kubectl-56fwp deletion completed in 24.418242605s • [SLOW TEST:41.179 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:34:16.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-a646595d-27d3-11ea-948a-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 26 11:34:17.049: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-27lkv" to be "success or failure" Dec 26 11:34:17.081: 
INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.201779ms Dec 26 11:34:19.128: INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079288169s Dec 26 11:34:21.156: INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107707125s Dec 26 11:34:24.090: INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.041156419s Dec 26 11:34:26.134: INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.085000041s Dec 26 11:34:28.147: INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.098317057s STEP: Saw pod success Dec 26 11:34:28.147: INFO: Pod "pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:34:28.152: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 26 11:34:28.443: INFO: Waiting for pod pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005 to disappear Dec 26 11:34:28.453: INFO: Pod pod-projected-configmaps-a6519f4b-27d3-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:34:28.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-27lkv" for this suite. 
Dec 26 11:34:34.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:34:34.785: INFO: namespace: e2e-tests-projected-27lkv, resource: bindings, ignored listing per whitelist Dec 26 11:34:34.814: INFO: namespace e2e-tests-projected-27lkv deletion completed in 6.347247012s • [SLOW TEST:18.091 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:34:34.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-b1158cdc-27d3-11ea-948a-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 26 11:34:35.102: INFO: Waiting up to 5m0s for pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-vgmh7" to be "success or failure" Dec 26 11:34:35.135: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.447799ms Dec 26 11:34:37.156: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053760348s Dec 26 11:34:39.178: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076299574s Dec 26 11:34:41.459: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357194786s Dec 26 11:34:43.468: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.365834124s Dec 26 11:34:45.481: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.379159875s STEP: Saw pod success Dec 26 11:34:45.481: INFO: Pod "pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:34:45.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 26 11:34:45.666: INFO: Waiting for pod pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005 to disappear Dec 26 11:34:45.674: INFO: Pod pod-configmaps-b116d545-27d3-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:34:45.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vgmh7" for this suite. 
Dec 26 11:34:51.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:34:51.994: INFO: namespace: e2e-tests-configmap-vgmh7, resource: bindings, ignored listing per whitelist Dec 26 11:34:52.023: INFO: namespace e2e-tests-configmap-vgmh7 deletion completed in 6.342698088s • [SLOW TEST:17.209 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:34:52.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-tt2d2 in namespace e2e-tests-proxy-rqbpc I1226 11:34:52.633145 8 runners.go:184] Created replication controller with name: proxy-service-tt2d2, namespace: e2e-tests-proxy-rqbpc, replica count: 1 I1226 11:34:53.684276 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:34:54.684680 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:34:55.685243 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:34:56.685857 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:34:57.686238 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:34:58.686985 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:34:59.687339 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:35:00.687689 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:35:01.688596 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 11:35:02.689148 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:03.689495 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:04.690100 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:05.690509 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 1 runningButNotReady I1226 11:35:06.691213 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:07.691679 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:08.692010 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:09.693101 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 11:35:10.693587 8 runners.go:184] proxy-service-tt2d2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 26 11:35:10.710: INFO: setup took 18.420479621s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 26 11:35:10.755: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rqbpc/pods/http:proxy-service-tt2d2-56cff:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:35:44.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-pwjrt" for 
this suite. Dec 26 11:36:10.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:36:10.731: INFO: namespace: e2e-tests-replication-controller-pwjrt, resource: bindings, ignored listing per whitelist Dec 26 11:36:10.864: INFO: namespace e2e-tests-replication-controller-pwjrt deletion completed in 26.313366019s • [SLOW TEST:39.779 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:36:10.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:36:21.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6sbtj" for this suite. 
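The hostAliases test above checks that the kubelet writes extra entries into the container's `/etc/hosts`. The file format is plain `IP name [name…]` lines; a sketch that writes and checks a stand-in file (the path, IP, and alias names are invented for illustration):

```shell
#!/bin/sh
HOSTS=$(mktemp)   # stand-in for the container's /etc/hosts

# Roughly what the kubelet does for each hostAliases entry in the pod spec.
cat >> "$HOSTS" <<'EOF'
123.45.67.89	foo.local	bar.local
EOF

# Roughly what the test container does: look for the injected alias.
MATCH=$(grep -c 'foo.local' "$HOSTS")
echo "matches: $MATCH"

rm -f "$HOSTS"
```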
Dec 26 11:37:15.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:37:15.368: INFO: namespace: e2e-tests-kubelet-test-6sbtj, resource: bindings, ignored listing per whitelist Dec 26 11:37:15.456: INFO: namespace e2e-tests-kubelet-test-6sbtj deletion completed in 54.295026093s • [SLOW TEST:64.591 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:37:15.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 26 11:37:15.711: INFO: Waiting up to 5m0s for pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-29hhd" to be "success or failure" Dec 26 11:37:15.725: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005": 
Phase="Pending", Reason="", readiness=false. Elapsed: 14.248758ms Dec 26 11:37:17.766: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054896807s Dec 26 11:37:19.783: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072531364s Dec 26 11:37:21.813: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10180735s Dec 26 11:37:23.830: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119154819s Dec 26 11:37:25.856: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144783389s STEP: Saw pod success Dec 26 11:37:25.856: INFO: Pod "downward-api-10cf06a2-27d4-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:37:25.870: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-10cf06a2-27d4-11ea-948a-0242ac110005 container dapi-container: STEP: delete the pod Dec 26 11:37:25.967: INFO: Waiting for pod downward-api-10cf06a2-27d4-11ea-948a-0242ac110005 to disappear Dec 26 11:37:25.977: INFO: Pod downward-api-10cf06a2-27d4-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:37:25.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-29hhd" for this suite. 
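The downward API test above injects the pod's name, namespace, and IP into the container as environment variables and then verifies the container's output. From the container's side this reduces to reading ordinary env vars; a sketch with hand-set values standing in for the kubelet's fieldRef injection (all names and values here are illustrative):

```shell
#!/bin/sh
# In the real pod these are set by the kubelet from fieldRefs
# (metadata.name, metadata.namespace, status.podIP); hard-coded here.
export MY_POD_NAME=downward-api-example
export MY_POD_NAMESPACE=e2e-tests-downward-api-example
export MY_POD_IP=10.32.0.9

# The test container just echoes them for the framework to verify.
OUT=$(sh -c 'echo "$MY_POD_NAME $MY_POD_NAMESPACE $MY_POD_IP"')
echo "$OUT"
```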
Dec 26 11:37:32.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:37:32.229: INFO: namespace: e2e-tests-downward-api-29hhd, resource: bindings, ignored listing per whitelist Dec 26 11:37:32.287: INFO: namespace e2e-tests-downward-api-29hhd deletion completed in 6.300766679s • [SLOW TEST:16.831 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:37:32.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:37:32.647: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Dec 26 11:37:32.670: INFO: Number of nodes with available pods: 0 Dec 26 11:37:32.670: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
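The step above relabels a node and waits for the DaemonSet controller to place a pod on it. A hedged sketch of the two moving parts; the label key/value `color: blue` is an assumption (the real test uses a generated label), and the node name is hypothetical:

```shell
# Sketch of the label-driven DaemonSet scheduling the test exercises.
# 'color=blue' is illustrative; the e2e test uses a generated label key.
cat <<'EOF' > /tmp/daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
# Against a live cluster one would then run (node name hypothetical):
#   kubectl label node <node-name> color=blue
#   kubectl apply -f /tmp/daemon-set.yaml
grep -q 'nodeSelector' /tmp/daemon-set.yaml && echo manifest-ok
```

Relabeling the node to `green` later in the test makes the selector stop matching, which is why the pod count drops back to zero before the RollingUpdate phase.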
Dec 26 11:37:32.880: INFO: Number of nodes with available pods: 0 Dec 26 11:37:32.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:33.969: INFO: Number of nodes with available pods: 0 Dec 26 11:37:33.969: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:34.906: INFO: Number of nodes with available pods: 0 Dec 26 11:37:34.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:35.898: INFO: Number of nodes with available pods: 0 Dec 26 11:37:35.898: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:36.894: INFO: Number of nodes with available pods: 0 Dec 26 11:37:36.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:38.191: INFO: Number of nodes with available pods: 0 Dec 26 11:37:38.191: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:38.896: INFO: Number of nodes with available pods: 0 Dec 26 11:37:38.897: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:39.907: INFO: Number of nodes with available pods: 0 Dec 26 11:37:39.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:40.894: INFO: Number of nodes with available pods: 0 Dec 26 11:37:40.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:41.897: INFO: Number of nodes with available pods: 1 Dec 26 11:37:41.897: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 26 11:37:41.988: INFO: Number of nodes with available pods: 1 Dec 26 11:37:41.988: INFO: Number of running nodes: 0, number of available pods: 1 Dec 26 11:37:43.000: INFO: Number of nodes with available pods: 0 Dec 26 11:37:43.000: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet 
node selector to green, and change its update strategy to RollingUpdate Dec 26 11:37:43.033: INFO: Number of nodes with available pods: 0 Dec 26 11:37:43.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:44.064: INFO: Number of nodes with available pods: 0 Dec 26 11:37:44.065: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:45.198: INFO: Number of nodes with available pods: 0 Dec 26 11:37:45.199: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:46.045: INFO: Number of nodes with available pods: 0 Dec 26 11:37:46.045: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:47.189: INFO: Number of nodes with available pods: 0 Dec 26 11:37:47.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:48.050: INFO: Number of nodes with available pods: 0 Dec 26 11:37:48.051: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:49.044: INFO: Number of nodes with available pods: 0 Dec 26 11:37:49.044: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:50.048: INFO: Number of nodes with available pods: 0 Dec 26 11:37:50.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:51.050: INFO: Number of nodes with available pods: 0 Dec 26 11:37:51.050: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:52.070: INFO: Number of nodes with available pods: 0 Dec 26 11:37:52.070: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:53.179: INFO: Number of nodes with available pods: 0 Dec 26 11:37:53.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:54.317: INFO: Number of nodes with available pods: 0 Dec 26 11:37:54.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon 
pod Dec 26 11:37:55.053: INFO: Number of nodes with available pods: 0 Dec 26 11:37:55.053: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:56.107: INFO: Number of nodes with available pods: 0 Dec 26 11:37:56.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:57.065: INFO: Number of nodes with available pods: 0 Dec 26 11:37:57.065: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:58.057: INFO: Number of nodes with available pods: 0 Dec 26 11:37:58.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:37:59.248: INFO: Number of nodes with available pods: 0 Dec 26 11:37:59.248: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:38:00.052: INFO: Number of nodes with available pods: 0 Dec 26 11:38:00.052: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:38:01.061: INFO: Number of nodes with available pods: 0 Dec 26 11:38:01.061: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:38:02.064: INFO: Number of nodes with available pods: 0 Dec 26 11:38:02.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 26 11:38:03.054: INFO: Number of nodes with available pods: 1 Dec 26 11:38:03.054: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lgcr7, will wait for the garbage collector to delete the pods Dec 26 11:38:03.159: INFO: Deleting DaemonSet.extensions daemon-set took: 19.446182ms Dec 26 11:38:03.360: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.51939ms Dec 26 11:38:12.666: INFO: Number of nodes with available 
pods: 0 Dec 26 11:38:12.666: INFO: Number of running nodes: 0, number of available pods: 0 Dec 26 11:38:12.679: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lgcr7/daemonsets","resourceVersion":"16119163"},"items":null} Dec 26 11:38:12.682: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lgcr7/pods","resourceVersion":"16119163"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:38:12.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-lgcr7" for this suite. Dec 26 11:38:20.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:38:20.955: INFO: namespace: e2e-tests-daemonsets-lgcr7, resource: bindings, ignored listing per whitelist Dec 26 11:38:20.961: INFO: namespace e2e-tests-daemonsets-lgcr7 deletion completed in 8.174070565s • [SLOW TEST:48.672 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:38:20.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: 
Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Dec 26 11:38:21.204: INFO: Pod name pod-release: Found 0 pods out of 1 Dec 26 11:38:26.226: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:38:27.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-47m2c" for this suite. Dec 26 11:38:36.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:38:36.224: INFO: namespace: e2e-tests-replication-controller-47m2c, resource: bindings, ignored listing per whitelist Dec 26 11:38:36.257: INFO: namespace e2e-tests-replication-controller-47m2c deletion completed in 8.958673365s • [SLOW TEST:15.296 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:38:36.257: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Dec 26 11:38:48.778: INFO: Successfully updated pod "annotationupdate41b0086e-27d4-11ea-948a-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:38:50.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zc9fk" for this suite. Dec 26 11:39:15.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:39:15.212: INFO: namespace: e2e-tests-projected-zc9fk, resource: bindings, ignored listing per whitelist Dec 26 11:39:15.285: INFO: namespace e2e-tests-projected-zc9fk deletion completed in 24.330035678s • [SLOW TEST:39.028 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:39:15.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pn75q STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 26 11:39:15.516: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 26 11:39:57.948: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-pn75q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 11:39:57.948: INFO: >>> kubeConfig: /root/.kube/config Dec 26 11:39:59.376: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:39:59.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-pn75q" for this suite. 
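The UDP check above pipes a payload through `nc` from the host-exec pod to the netserver pod. The probe command as captured in the log, plus a local demonstration of its trailing filter (no cluster needed for the demo; the pod IP and port are the ones the log recorded for this run):

```shell
# Probe command as run by the test (reconstructed from the log):
#   echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'
# The trailing grep drops blank lines, so an empty UDP reply fails the check.
# That filter can be demonstrated locally without a cluster:
printf 'netserver-0\n\n' | grep -v '^[[:space:]]*$'   # prints netserver-0
```

The test passes once every expected endpoint (here `netserver-0`) echoes back a non-blank hostname.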
Dec 26 11:40:25.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:40:25.680: INFO: namespace: e2e-tests-pod-network-test-pn75q, resource: bindings, ignored listing per whitelist Dec 26 11:40:25.684: INFO: namespace e2e-tests-pod-network-test-pn75q deletion completed in 26.291644114s • [SLOW TEST:70.398 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:40:25.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Dec 26 11:40:26.099: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 26 11:40:26.114: INFO: Waiting for terminating namespaces to be deleted... 
Dec 26 11:40:26.119: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Dec 26 11:40:26.136: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:40:26.136: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Dec 26 11:40:26.136: INFO: Container weave ready: true, restart count 0 Dec 26 11:40:26.136: INFO: Container weave-npc ready: true, restart count 0 Dec 26 11:40:26.136: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 26 11:40:26.136: INFO: Container coredns ready: true, restart count 0 Dec 26 11:40:26.136: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:40:26.136: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:40:26.136: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:40:26.136: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 26 11:40:26.136: INFO: Container coredns ready: true, restart count 0 Dec 26 11:40:26.136: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Dec 26 11:40:26.136: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e3e8f34cf6edce], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] 
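The `FailedScheduling` event above is exactly what the test asserts on: a pod whose `nodeSelector` matches no node stays Pending. A sketch of such a pod; the label key/value are illustrative, not taken from the log:

```shell
# Sketch: a pod that cannot schedule because no node carries its selector label.
cat <<'EOF' > /tmp/restricted-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    nonexistent-label: "42"   # illustrative; no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# With a cluster, the event would surface via:
#   kubectl describe pod restricted-pod | grep FailedScheduling
grep -q 'nodeSelector' /tmp/restricted-pod.yaml && echo manifest-ok
```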
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:40:27.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-t8pbk" for this suite. Dec 26 11:40:33.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:40:33.465: INFO: namespace: e2e-tests-sched-pred-t8pbk, resource: bindings, ignored listing per whitelist Dec 26 11:40:33.476: INFO: namespace e2e-tests-sched-pred-t8pbk deletion completed in 6.266972817s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.792 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:40:33.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment 
STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W1226 11:40:37.292732 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 26 11:40:37.293: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:40:37.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-cfvrv" for this suite. 
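The garbage-collector test above deletes a Deployment without orphaning and waits for its ReplicaSet and pods to be collected. A dry sketch of the two deletion modes; the deployment name is illustrative, and note that on the v1.13 cluster in this log the orphaning form was spelled `--cascade=false` (newer kubectl spells it `--cascade=orphan`):

```shell
# Dry sketch of cascading vs. orphaning deletion (deployment name illustrative).
# Default deletion cascades: the owned ReplicaSet and pods are collected too.
delete_cascading="kubectl delete deployment nginx-deploy"
# Orphaning (the case this test does NOT take) on kubectl v1.13:
delete_orphaning="kubectl delete deployment nginx-deploy --cascade=false"
echo "$delete_cascading"
```

The "expected 0 pods, got 2 pods" lines in the log are the test polling while the collector catches up, not a failure.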
Dec 26 11:40:46.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:40:46.383: INFO: namespace: e2e-tests-gc-cfvrv, resource: bindings, ignored listing per whitelist Dec 26 11:40:46.389: INFO: namespace e2e-tests-gc-cfvrv deletion completed in 9.026818356s • [SLOW TEST:12.912 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:40:46.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Dec 26 11:40:59.929: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:41:01.057: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-6srqk" for this suite. Dec 26 11:44:19.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:44:19.713: INFO: namespace: e2e-tests-replicaset-6srqk, resource: bindings, ignored listing per whitelist Dec 26 11:44:19.933: INFO: namespace e2e-tests-replicaset-6srqk deletion completed in 3m18.866954054s • [SLOW TEST:213.544 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:44:19.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-92pq7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-92pq7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 11:44:36.369: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.378: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.386: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.396: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the 
requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.403: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.408: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.414: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.423: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.430: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.442: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.448: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.456: INFO: Unable to read jessie_tcp@kubernetes.default from 
pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.478: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.503: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.516: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.541: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.562: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.583: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.605: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods 
dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.617: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005) Dec 26 11:44:36.617: INFO: Lookups using e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-92pq7.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 26 11:44:41.907: INFO: DNS probes using e2e-tests-dns-92pq7/dns-test-0dd2ce40-27d5-11ea-948a-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:44:41.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-92pq7" for this suite. 
Dec 26 11:44:50.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:44:50.372: INFO: namespace: e2e-tests-dns-92pq7, resource: bindings, ignored listing per whitelist Dec 26 11:44:50.410: INFO: namespace e2e-tests-dns-92pq7 deletion completed in 8.410509662s • [SLOW TEST:30.477 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:44:50.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-200a0a3f-27d5-11ea-948a-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-200a0d16-27d5-11ea-948a-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-200a0a3f-27d5-11ea-948a-0242ac110005 STEP: Updating configmap cm-test-opt-upd-200a0d16-27d5-11ea-948a-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-200a0d46-27d5-11ea-948a-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:45:11.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wqmfc" for this suite. Dec 26 11:45:37.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:45:37.245: INFO: namespace: e2e-tests-projected-wqmfc, resource: bindings, ignored listing per whitelist Dec 26 11:45:37.436: INFO: namespace e2e-tests-projected-wqmfc deletion completed in 26.269846584s • [SLOW TEST:47.026 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:45:37.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Dec 26 11:45:37.703: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 26 11:45:37.780: INFO: Waiting for terminating namespaces to be deleted... 
Dec 26 11:45:37.784: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Dec 26 11:45:37.806: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 26 11:45:37.806: INFO: Container coredns ready: true, restart count 0 Dec 26 11:45:37.806: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:45:37.806: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:45:37.806: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:45:37.806: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 26 11:45:37.806: INFO: Container coredns ready: true, restart count 0 Dec 26 11:45:37.806: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Dec 26 11:45:37.806: INFO: Container kube-proxy ready: true, restart count 0 Dec 26 11:45:37.806: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 26 11:45:37.806: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Dec 26 11:45:37.806: INFO: Container weave ready: true, restart count 0 Dec 26 11:45:37.806: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-4237502a-27d5-11ea-948a-0242ac110005 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4237502a-27d5-11ea-948a-0242ac110005 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-4237502a-27d5-11ea-948a-0242ac110005 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:46:00.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-pftkc" for this suite. Dec 26 11:46:14.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:46:14.809: INFO: namespace: e2e-tests-sched-pred-pftkc, resource: bindings, ignored listing per whitelist Dec 26 11:46:14.855: INFO: namespace e2e-tests-sched-pred-pftkc deletion completed in 14.37758798s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:37.419 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:46:14.856: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-524d49fa-27d5-11ea-948a-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 26 11:46:15.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-7ggnt" to be "success or failure" Dec 26 11:46:15.083: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.348433ms Dec 26 11:46:17.352: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277131066s Dec 26 11:46:19.385: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310633634s Dec 26 11:46:22.072: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.997338233s Dec 26 11:46:24.087: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.012396987s Dec 26 11:46:26.105: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.030503316s STEP: Saw pod success Dec 26 11:46:26.105: INFO: Pod "pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:46:26.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 26 11:46:26.769: INFO: Waiting for pod pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005 to disappear Dec 26 11:46:26.793: INFO: Pod pod-projected-configmaps-524e783c-27d5-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:46:26.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7ggnt" for this suite. Dec 26 11:46:32.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:46:33.287: INFO: namespace: e2e-tests-projected-7ggnt, resource: bindings, ignored listing per whitelist Dec 26 11:46:33.345: INFO: namespace e2e-tests-projected-7ggnt deletion completed in 6.5272578s • [SLOW TEST:18.490 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:46:33.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 26 11:46:33.572: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 26 11:46:33.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:46:36.188: INFO: stderr: ""
Dec 26 11:46:36.189: INFO: stdout: "service/redis-slave created\n"
Dec 26 11:46:36.189: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 26 11:46:36.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:46:36.934: INFO: stderr: ""
Dec 26 11:46:36.934: INFO: stdout: "service/redis-master created\n"
Dec 26 11:46:36.935: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 26 11:46:36.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:46:37.563: INFO: stderr: ""
Dec 26 11:46:37.563: INFO: stdout: "service/frontend created\n"
Dec 26 11:46:37.564: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 26 11:46:37.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:46:37.914: INFO: stderr: ""
Dec 26 11:46:37.914: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 26 11:46:37.915: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 26 11:46:37.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:46:38.477: INFO: stderr: ""
Dec 26 11:46:38.477: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 26 11:46:38.478: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 26 11:46:38.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:46:38.926: INFO: stderr: ""
Dec 26 11:46:38.926: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 26 11:46:38.926: INFO: Waiting for all frontend pods to be Running.
Dec 26 11:47:08.980: INFO: Waiting for frontend to serve content.
Dec 26 11:47:10.151: INFO: Trying to add a new entry to the guestbook.
Dec 26 11:47:10.195: INFO: Verifying that added entry can be retrieved.
Dec 26 11:47:10.288: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 26 11:47:15.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:47:15.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 11:47:15.659: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 26 11:47:15.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfhc2'
Dec 26 11:47:16.221: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Dec 26 11:47:16.221: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 26 11:47:16.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfhc2' Dec 26 11:47:16.684: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 11:47:16.684: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 26 11:47:16.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfhc2' Dec 26 11:47:16.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 11:47:16.825: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 26 11:47:16.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfhc2' Dec 26 11:47:17.224: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 11:47:17.224: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 26 11:47:17.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfhc2' Dec 26 11:47:17.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 26 11:47:17.441: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:47:17.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sfhc2" for this suite. Dec 26 11:48:03.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:48:03.784: INFO: namespace: e2e-tests-kubectl-sfhc2, resource: bindings, ignored listing per whitelist Dec 26 11:48:03.787: INFO: namespace e2e-tests-kubectl-sfhc2 deletion completed in 46.271760717s • [SLOW TEST:90.441 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:48:03.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:48:04.235: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 26 11:48:09.251: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 26 11:48:15.275: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 26 11:48:15.343: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-9cljj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9cljj/deployments/test-cleanup-deployment,UID:99f86edb-27d5-11ea-a994-fa163e34d433,ResourceVersion:16120448,Generation:1,CreationTimestamp:2019-12-26 11:48:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Dec 26 11:48:15.361: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Dec 26 11:48:15.361: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Dec 26 11:48:15.362: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-9cljj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9cljj/replicasets/test-cleanup-controller,UID:934d3452-27d5-11ea-a994-fa163e34d433,ResourceVersion:16120449,Generation:1,CreationTimestamp:2019-12-26 11:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 99f86edb-27d5-11ea-a994-fa163e34d433 0xc00192e2e7 0xc00192e2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 26 11:48:15.491: INFO: Pod "test-cleanup-controller-q6dgg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-q6dgg,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-9cljj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9cljj/pods/test-cleanup-controller-q6dgg,UID:9362ae59-27d5-11ea-a994-fa163e34d433,ResourceVersion:16120443,Generation:0,CreationTimestamp:2019-12-26 11:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 934d3452-27d5-11ea-a994-fa163e34d433 0xc000834437 0xc000834438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-smrxk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-smrxk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-smrxk true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0008345e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000834600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:48:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:48:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:48:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 11:48:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-26 11:48:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 11:48:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e59561194d725ad1da36b532773fbaeb6daac22f9b11d1063dbf709dd20e29ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:48:15.492: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-9cljj" for this suite. Dec 26 11:48:24.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:48:24.784: INFO: namespace: e2e-tests-deployment-9cljj, resource: bindings, ignored listing per whitelist Dec 26 11:48:26.383: INFO: namespace e2e-tests-deployment-9cljj deletion completed in 10.88269446s • [SLOW TEST:22.595 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:48:26.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-dfm5t Dec 26 11:48:37.199: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-dfm5t STEP: checking the pod's current state and verifying that restartCount is 
present Dec 26 11:48:37.213: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:52:38.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-dfm5t" for this suite. Dec 26 11:52:46.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:52:46.410: INFO: namespace: e2e-tests-container-probe-dfm5t, resource: bindings, ignored listing per whitelist Dec 26 11:52:46.705: INFO: namespace e2e-tests-container-probe-dfm5t deletion completed in 8.444138677s • [SLOW TEST:260.321 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:52:46.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never 
restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:53:14.954: INFO: Container started at 2019-12-26 11:52:54 +0000 UTC, pod became ready at 2019-12-26 11:53:14 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:53:14.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jj42w" for this suite. Dec 26 11:53:39.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:53:39.076: INFO: namespace: e2e-tests-container-probe-jj42w, resource: bindings, ignored listing per whitelist Dec 26 11:53:39.122: INFO: namespace e2e-tests-container-probe-jj42w deletion completed in 24.160246717s • [SLOW TEST:52.417 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:53:39.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-fvdsr/configmap-test-5b164926-27d6-11ea-948a-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 26 11:53:39.309: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-fvdsr" to be "success or failure" Dec 26 11:53:39.339: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.089886ms Dec 26 11:53:41.575: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265516083s Dec 26 11:53:43.600: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29048202s Dec 26 11:53:45.614: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304500624s Dec 26 11:53:47.622: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312345842s Dec 26 11:53:49.642: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.332744669s Dec 26 11:53:51.655: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.345086204s STEP: Saw pod success Dec 26 11:53:51.655: INFO: Pod "pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:53:51.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005 container env-test: STEP: delete the pod Dec 26 11:53:51.742: INFO: Waiting for pod pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005 to disappear Dec 26 11:53:51.905: INFO: Pod pod-configmaps-5b17086d-27d6-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:53:51.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fvdsr" for this suite. Dec 26 11:53:59.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:54:00.009: INFO: namespace: e2e-tests-configmap-fvdsr, resource: bindings, ignored listing per whitelist Dec 26 11:54:00.145: INFO: namespace e2e-tests-configmap-fvdsr deletion completed in 8.211015228s • [SLOW TEST:21.023 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:54:00.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns 
STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-7nx86;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-7nx86;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-7nx86.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.40.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.40.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.40.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.40.175_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-7nx86;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-7nx86;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nx86.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nx86.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-7nx86.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.40.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.40.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.40.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.40.175_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 11:54:14.715: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.722: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.731: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nx86 from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.743: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-7nx86 from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.750: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.756: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc from pod 
e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.768: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.777: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.782: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.786: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.790: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.795: INFO: Unable to read 10.96.40.175_udp@PTR from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.800: INFO: Unable to read 10.96.40.175_tcp@PTR from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods 
dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.804: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.808: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.812: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nx86 from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.817: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nx86 from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.827: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.833: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.838: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.843: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.847: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.853: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.857: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.862: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.872: INFO: Unable to read 10.96.40.175_udp@PTR from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.878: INFO: Unable to read 10.96.40.175_tcp@PTR from pod e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005: the server could not find the requested resource (get pods dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005) Dec 26 11:54:14.878: INFO: Lookups using e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.e2e-tests-dns-7nx86 wheezy_tcp@dns-test-service.e2e-tests-dns-7nx86 wheezy_udp@dns-test-service.e2e-tests-dns-7nx86.svc wheezy_tcp@dns-test-service.e2e-tests-dns-7nx86.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.40.175_udp@PTR 10.96.40.175_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nx86 jessie_tcp@dns-test-service.e2e-tests-dns-7nx86 jessie_udp@dns-test-service.e2e-tests-dns-7nx86.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nx86.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nx86.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nx86.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.40.175_udp@PTR 10.96.40.175_tcp@PTR] Dec 26 11:54:20.211: INFO: DNS probes using e2e-tests-dns-7nx86/dns-test-67b3fbe8-27d6-11ea-948a-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:54:20.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-7nx86" for this suite. 
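The probe commands in the DNS test above build two kinds of names with awk: a reverse `in-addr.arpa.` PTR name from the service IP (10.96.40.175 becomes 175.40.96.10.in-addr.arpa.) and a dashed pod A-record name from the pod IP. The sketch below isolates those two transformations so they can be run without a cluster; the namespace and cluster domain `cluster.local` are taken from the log, everything else mirrors the awk in the probe commands.

```shell
#!/bin/sh
# Sketch of the name transformations used by the dig probe commands above.
# Runs offline; no DNS queries are made.

# Build the in-addr.arpa PTR name for an IPv4 address by reversing its
# octets, e.g. 10.96.40.175 -> 175.40.96.10.in-addr.arpa.
ptr_name() {
    echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

# Build the pod A-record name the probe queries: dots become dashes, then
# <dashed-ip>.<namespace>.pod.<cluster-domain> is appended, exactly as the
# `hostname -i | awk ...` pipeline in the probe script does.
pod_a_record() {
    ip=$1 ns=$2
    echo "$ip" | awk -F. -v ns="$ns" \
        '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

ptr_name 10.96.40.175                        # -> 175.40.96.10.in-addr.arpa.
pod_a_record 10.32.0.4 e2e-tests-dns-7nx86
```

The early "Unable to read" entries followed by "DNS probes ... succeeded" show the intended flow: the probe retries once per second for up to 600 iterations, so transient NXDOMAIN results while records propagate do not fail the test.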
Dec 26 11:54:27.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:54:27.146: INFO: namespace: e2e-tests-dns-7nx86, resource: bindings, ignored listing per whitelist Dec 26 11:54:27.263: INFO: namespace e2e-tests-dns-7nx86 deletion completed in 6.406699984s • [SLOW TEST:27.118 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:54:27.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 26 11:54:27.609: INFO: Waiting up to 5m0s for pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-4dgsk" to be "success or failure" Dec 26 11:54:27.631: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.938703ms Dec 26 11:54:29.790: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.180957984s Dec 26 11:54:31.810: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200473722s Dec 26 11:54:34.003: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394212391s Dec 26 11:54:36.286: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.677203385s Dec 26 11:54:38.309: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.700059135s STEP: Saw pod success Dec 26 11:54:38.309: INFO: Pod "pod-77e10fcd-27d6-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:54:38.333: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-77e10fcd-27d6-11ea-948a-0242ac110005 container test-container: STEP: delete the pod Dec 26 11:54:38.731: INFO: Waiting for pod pod-77e10fcd-27d6-11ea-948a-0242ac110005 to disappear Dec 26 11:54:38.766: INFO: Pod pod-77e10fcd-27d6-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:54:38.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4dgsk" for this suite. 
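The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework polling the pod's phase until it reaches "success or failure" within a 5m deadline. A minimal sketch of that wait loop is below; `get_phase` stands in for a `kubectl get pod -o jsonpath='{.status.phase}'` call and is mocked with a counter file so the sketch runs without a cluster — the threshold of 3 attempts is an arbitrary assumption for illustration.

```shell
#!/bin/sh
# Sketch of the framework's poll-until-Succeeded loop, with the cluster
# call mocked out. A file holds the attempt count so the mock's state
# survives command substitution subshells.

state_file=$(mktemp)
echo 0 > "$state_file"

# Mock: reports Pending twice, then Succeeded (assumed sequence).
get_phase() {
    n=$(cat "$state_file")
    n=$((n + 1))
    echo "$n" > "$state_file"
    if [ "$n" -ge 3 ]; then echo Succeeded; else echo Pending; fi
}

# Poll until Succeeded or the attempt budget runs out; the real framework
# bounds this by wall-clock time (5m0s) rather than attempt count.
wait_for_success() {
    deadline=$1
    i=0
    while [ "$i" -lt "$deadline" ]; do
        phase=$(get_phase)
        if [ "$phase" = "Succeeded" ]; then return 0; fi
        i=$((i + 1))
        # sleep 2   # poll interval; omitted so the sketch runs instantly
    done
    return 1
}

wait_for_success 10 && echo "Saw pod success"
```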
Dec 26 11:54:46.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:54:47.020: INFO: namespace: e2e-tests-emptydir-4dgsk, resource: bindings, ignored listing per whitelist Dec 26 11:54:47.043: INFO: namespace e2e-tests-emptydir-4dgsk deletion completed in 8.261240639s • [SLOW TEST:19.780 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:54:47.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 26 11:54:47.266: INFO: Waiting up to 5m0s for pod "pod-8398b38f-27d6-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-n44kd" to be "success or failure" Dec 26 11:54:47.299: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.850824ms Dec 26 11:54:49.887: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.621188253s Dec 26 11:54:51.909: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642917681s Dec 26 11:54:54.011: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.744562387s Dec 26 11:54:56.581: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.31517664s Dec 26 11:54:58.615: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.348279689s STEP: Saw pod success Dec 26 11:54:58.615: INFO: Pod "pod-8398b38f-27d6-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 11:54:58.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8398b38f-27d6-11ea-948a-0242ac110005 container test-container: STEP: delete the pod Dec 26 11:54:58.814: INFO: Waiting for pod pod-8398b38f-27d6-11ea-948a-0242ac110005 to disappear Dec 26 11:54:58.826: INFO: Pod pod-8398b38f-27d6-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 11:54:58.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n44kd" for this suite. 
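The (non-root,0666,tmpfs) case above mounts an emptyDir with the tmpfs medium and verifies a file created with mode 0666 inside it. The sketch below shows the permission check in isolation, using an ordinary temp directory rather than an actual tmpfs-backed emptyDir mount, so it runs anywhere; the filename is illustrative.

```shell
#!/bin/sh
# Sketch of the mode check the emptyDir tests perform: create a file with
# an explicit mode and read the permission bits back. Uses a plain temp
# directory as a stand-in for the tmpfs-medium emptyDir volume.

dir=$(mktemp -d)
f="$dir/test-file"

touch "$f"
chmod 0666 "$f"

# Octal permission bits; GNU stat first, BSD stat as a fallback.
mode=$(stat -c %a "$f" 2>/dev/null || stat -f %Lp "$f")
echo "mode=$mode"

rm -rf "$dir"
```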
Dec 26 11:55:04.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 11:55:05.132: INFO: namespace: e2e-tests-emptydir-n44kd, resource: bindings, ignored listing per whitelist Dec 26 11:55:05.161: INFO: namespace e2e-tests-emptydir-n44kd deletion completed in 6.326009277s • [SLOW TEST:18.118 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 11:55:05.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 11:55:05.527: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Dec 26 11:55:05.558: INFO: Number of nodes with available pods: 0
Dec 26 11:55:05.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:06.632: INFO: Number of nodes with available pods: 0
Dec 26 11:55:06.632: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:07.958: INFO: Number of nodes with available pods: 0
Dec 26 11:55:07.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:08.582: INFO: Number of nodes with available pods: 0
Dec 26 11:55:08.583: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:09.717: INFO: Number of nodes with available pods: 0
Dec 26 11:55:09.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:10.617: INFO: Number of nodes with available pods: 0
Dec 26 11:55:10.617: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:12.049: INFO: Number of nodes with available pods: 0
Dec 26 11:55:12.049: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:12.624: INFO: Number of nodes with available pods: 0
Dec 26 11:55:12.624: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:13.644: INFO: Number of nodes with available pods: 0
Dec 26 11:55:13.644: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:14.692: INFO: Number of nodes with available pods: 0
Dec 26 11:55:14.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:15.591: INFO: Number of nodes with available pods: 0
Dec 26 11:55:15.591: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:16.606: INFO: Number of nodes with available pods: 1
Dec 26 11:55:16.607: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
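The "Update daemon pods image" step relies on the DaemonSet carrying a RollingUpdate strategy. A sketch of such a manifest (labels and container name are illustrative assumptions; the two images are the ones named in this log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                              # illustrative label
  updateStrategy:
    type: RollingUpdate                            # the strategy under test
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                                  # illustrative container name
        image: docker.io/library/nginx:1.14-alpine # initial image from the log
```

The image update itself corresponds roughly to `kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0` (container name assumed), after which the controller replaces pods one at a time.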
STEP: Check that daemon pods images are updated.
Dec 26 11:55:16.715: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:17.744: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:18.798: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:19.775: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:20.936: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:21.740: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:22.856: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:22.856: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:23.746: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:23.746: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:24.744: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:24.744: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:25.740: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:25.740: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:26.740: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:26.740: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:27.742: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:27.742: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:28.738: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:28.738: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:29.742: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:29.742: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:30.743: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:30.743: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:31.742: INFO: Wrong image for pod: daemon-set-67q6s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 11:55:31.742: INFO: Pod daemon-set-67q6s is not available
Dec 26 11:55:32.730: INFO: Pod daemon-set-hsn2r is not available
Dec 26 11:55:33.891: INFO: Pod daemon-set-hsn2r is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 26 11:55:33.976: INFO: Number of nodes with available pods: 0
Dec 26 11:55:33.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:35.019: INFO: Number of nodes with available pods: 0
Dec 26 11:55:35.019: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:35.995: INFO: Number of nodes with available pods: 0
Dec 26 11:55:35.995: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:37.005: INFO: Number of nodes with available pods: 0
Dec 26 11:55:37.005: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:38.170: INFO: Number of nodes with available pods: 0
Dec 26 11:55:38.171: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:39.160: INFO: Number of nodes with available pods: 0
Dec 26 11:55:39.160: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:40.159: INFO: Number of nodes with available pods: 0
Dec 26 11:55:40.160: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:41.170: INFO: Number of nodes with available pods: 0
Dec 26 11:55:41.170: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 26 11:55:42.100: INFO: Number of nodes with available pods: 1
Dec 26 11:55:42.100: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8b5m7, will wait for the garbage collector to delete the pods
Dec 26 11:55:42.211: INFO: Deleting DaemonSet.extensions daemon-set took: 14.428599ms
Dec 26 11:55:42.312: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.404117ms
Dec 26 11:55:49.618: INFO: Number of nodes with available pods: 0
Dec 26 11:55:49.618: INFO: Number of running nodes: 0, number of available pods: 0
Dec 26 11:55:49.621: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8b5m7/daemonsets","resourceVersion":"16121228"},"items":null}
Dec 26 11:55:49.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8b5m7/pods","resourceVersion":"16121228"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:55:49.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-8b5m7" for this suite.
Dec 26 11:55:55.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:55:55.994: INFO: namespace: e2e-tests-daemonsets-8b5m7, resource: bindings, ignored listing per whitelist
Dec 26 11:55:56.022: INFO: namespace e2e-tests-daemonsets-8b5m7 deletion completed in 6.205409889s
• [SLOW TEST:50.860 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:55:56.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 11:55:56.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-wj7gv" to be "success or failure"
Dec 26 11:55:56.349: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.522665ms
Dec 26 11:55:58.502: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171721013s
Dec 26 11:56:00.593: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262533807s
Dec 26 11:56:02.713: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382734836s
Dec 26 11:56:04.731: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401310734s
Dec 26 11:56:06.742: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.411808337s
STEP: Saw pod success
Dec 26 11:56:06.742: INFO: Pod "downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:56:06.748: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005 container client-container:
STEP: delete the pod
Dec 26 11:56:07.391: INFO: Waiting for pod downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005 to disappear
Dec 26 11:56:08.068: INFO: Pod downwardapi-volume-acb62319-27d6-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:56:08.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wj7gv" for this suite.
Dec 26 11:56:14.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:56:14.304: INFO: namespace: e2e-tests-projected-wj7gv, resource: bindings, ignored listing per whitelist
Dec 26 11:56:14.438: INFO: namespace e2e-tests-projected-wj7gv deletion completed in 6.33832142s
• [SLOW TEST:18.416 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:56:14.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 11:56:14.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-62zb9" to be "success or failure"
Dec 26 11:56:14.728: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.694187ms
Dec 26 11:56:16.894: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202000044s
Dec 26 11:56:18.909: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216494531s
Dec 26 11:56:21.216: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.523737403s
Dec 26 11:56:23.255: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562578367s
Dec 26 11:56:25.276: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.583246416s
Dec 26 11:56:27.288: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.595379419s
STEP: Saw pod success
Dec 26 11:56:27.288: INFO: Pod "downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:56:27.293: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005 container client-container:
STEP: delete the pod
Dec 26 11:56:27.889: INFO: Waiting for pod downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005 to disappear
Dec 26 11:56:27.898: INFO: Pod downwardapi-volume-b7b4fe54-27d6-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:56:27.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-62zb9" for this suite.
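The Downward API volume spec above verifies that when a container declares no CPU limit, the exposed `limits.cpu` value defaults to the node's allocatable CPU. A sketch of the kind of pod involved (name, image, and mount path are illustrative assumptions; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # container name from the log
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the downward API falls back
    # to the node's allocatable CPU, which is what the test asserts.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```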
Dec 26 11:56:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:56:34.309: INFO: namespace: e2e-tests-downward-api-62zb9, resource: bindings, ignored listing per whitelist
Dec 26 11:56:34.418: INFO: namespace e2e-tests-downward-api-62zb9 deletion completed in 6.504651352s
• [SLOW TEST:19.980 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:56:34.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c3a29eed-27d6-11ea-948a-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:56:46.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-st7zn" for this suite.
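The ConfigMap spec above mounts both text and binary payloads from one ConfigMap. The distinction lives in the `data` vs `binaryData` fields; a sketch (name, keys, and values are illustrative, not from this log):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example      # illustrative name
data:
  text-data: "some text"              # plain UTF-8 value
binaryData:
  binary-data: 3q2+7w==               # base64-encoded bytes (0xDE 0xAD 0xBE 0xEF here)
```

When mounted as a volume, each key appears as a file; the `binaryData` file carries the raw decoded bytes, which is what the "Waiting for pod with binary data" step checks.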
Dec 26 11:57:10.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:57:10.991: INFO: namespace: e2e-tests-configmap-st7zn, resource: bindings, ignored listing per whitelist
Dec 26 11:57:11.111: INFO: namespace e2e-tests-configmap-st7zn deletion completed in 24.290443769s
• [SLOW TEST:36.693 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:57:11.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 11:57:11.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-x9tq2" to be "success or failure"
Dec 26 11:57:11.504: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.721094ms
Dec 26 11:57:13.526: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046333662s
Dec 26 11:57:15.543: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063238063s
Dec 26 11:57:18.282: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.80203149s
Dec 26 11:57:20.295: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.814565634s
Dec 26 11:57:22.311: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.83115208s
Dec 26 11:57:24.494: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.013458512s
STEP: Saw pod success
Dec 26 11:57:24.494: INFO: Pod "downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 11:57:24.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005 container client-container:
STEP: delete the pod
Dec 26 11:57:24.855: INFO: Waiting for pod downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005 to disappear
Dec 26 11:57:24.884: INFO: Pod downwardapi-volume-d9853fc4-27d6-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:57:24.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x9tq2" for this suite.
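The projected-downwardAPI "DefaultMode" spec above asserts that files materialized by a projected volume get the requested permission bits. A sketch of the shape of such a pod (the name, image, mode value, and paths are illustrative assumptions; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-example # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # container name from the log
    image: busybox                    # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400               # the DefaultMode being asserted on (example value)
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```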
Dec 26 11:57:31.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:57:31.151: INFO: namespace: e2e-tests-projected-x9tq2, resource: bindings, ignored listing per whitelist
Dec 26 11:57:31.277: INFO: namespace e2e-tests-projected-x9tq2 deletion completed in 6.274650174s
• [SLOW TEST:20.166 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:57:31.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-e57230f6-27d6-11ea-948a-0242ac110005
STEP: Creating secret with name s-test-opt-upd-e572315d-27d6-11ea-948a-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e57230f6-27d6-11ea-948a-0242ac110005
STEP: Updating secret s-test-opt-upd-e572315d-27d6-11ea-948a-0242ac110005
STEP: Creating secret with name s-test-opt-create-e572317e-27d6-11ea-948a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:58:53.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nddwf" for this suite.
Dec 26 11:59:17.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 11:59:17.979: INFO: namespace: e2e-tests-secrets-nddwf, resource: bindings, ignored listing per whitelist
Dec 26 11:59:18.089: INFO: namespace e2e-tests-secrets-nddwf deletion completed in 24.366960727s
• [SLOW TEST:106.811 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 11:59:18.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-k2c2m
STEP: Waiting for pods to come up.
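The PreStop spec being run here verifies that a container's `preStop` lifecycle hook fires before the pod is killed (the server pod's `"prestop": 1` counter in the log is the evidence). The hook takes roughly this shape; the pod name, image, and hook command/URL below are illustrative assumptions, not the suite's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-example               # illustrative name
spec:
  containers:
  - name: tester
    image: busybox                    # assumed image
    lifecycle:
      preStop:
        exec:
          # Notify the server pod before this container is killed;
          # host and path are illustrative.
          command: ["wget", "-qO-", "http://server:8080/prestop"]
```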
STEP: Creating tester pod tester in namespace e2e-tests-prestop-k2c2m
STEP: Deleting pre-stop pod
Dec 26 11:59:41.785: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 11:59:41.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-k2c2m" for this suite.
Dec 26 12:00:24.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:00:24.242: INFO: namespace: e2e-tests-prestop-k2c2m, resource: bindings, ignored listing per whitelist
Dec 26 12:00:24.281: INFO: namespace e2e-tests-prestop-k2c2m deletion completed in 42.382045013s
• [SLOW TEST:66.192 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:00:24.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 26 12:00:24.695: INFO: Waiting up to 5m0s for pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-q6s5g" to be "success or failure"
Dec 26 12:00:24.723: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.800031ms
Dec 26 12:00:26.750: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055027064s
Dec 26 12:00:28.766: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071662218s
Dec 26 12:00:30.782: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087000889s
Dec 26 12:00:32.795: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100768086s
Dec 26 12:00:34.807: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111870557s
STEP: Saw pod success
Dec 26 12:00:34.807: INFO: Pod "downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:00:34.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 26 12:00:34.873: INFO: Waiting for pod downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005 to disappear
Dec 26 12:00:34.954: INFO: Pod downward-api-4cb7de5b-27d7-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:00:34.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q6s5g" for this suite.
Dec 26 12:00:41.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:00:41.170: INFO: namespace: e2e-tests-downward-api-q6s5g, resource: bindings, ignored listing per whitelist
Dec 26 12:00:41.187: INFO: namespace e2e-tests-downward-api-q6s5g deletion completed in 6.22079309s
• [SLOW TEST:16.906 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:00:41.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:00:41.430: INFO: Creating ReplicaSet my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005
Dec 26 12:00:41.500: INFO: Pod name my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005: Found 0 pods out of 1
Dec 26 12:00:47.483: INFO: Pod name my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005: Found 1 pods out of 1
Dec 26 12:00:47.483: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005" is running
Dec 26 12:00:51.530: INFO: Pod "my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005-nszrt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:00:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:00:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:00:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:00:41 +0000 UTC Reason: Message:}])
Dec 26 12:00:51.530: INFO: Trying to dial the pod
Dec 26 12:00:56.644: INFO: Controller my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005: Got expected result from replica 1 [my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005-nszrt]: "my-hostname-basic-56b3dffc-27d7-11ea-948a-0242ac110005-nszrt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:00:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-h5nkr" for this suite.
Dec 26 12:01:04.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:01:04.721: INFO: namespace: e2e-tests-replicaset-h5nkr, resource: bindings, ignored listing per whitelist
Dec 26 12:01:04.796: INFO: namespace e2e-tests-replicaset-h5nkr deletion completed in 8.143463482s
• [SLOW TEST:23.608 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:01:04.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-k859f
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 26 12:01:06.109: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 26 12:01:40.320: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-k859f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:01:40.320: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:01:40.916: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:01:40.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-k859f" for this suite.
Dec 26 12:02:05.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:02:05.206: INFO: namespace: e2e-tests-pod-network-test-k859f, resource: bindings, ignored listing per whitelist
Dec 26 12:02:05.258: INFO: namespace e2e-tests-pod-network-test-k859f deletion completed in 24.316993229s
• [SLOW TEST:60.462 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 12:02:05.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 26 12:02:05.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-q5sm2" to be "success or failure" Dec 26 12:02:05.481: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.330254ms Dec 26 12:02:07.560: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08878569s Dec 26 12:02:09.577: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10621049s Dec 26 12:02:11.844: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372669596s Dec 26 12:02:13.859: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387744285s Dec 26 12:02:15.930: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.459128394s Dec 26 12:02:17.951: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.480053621s STEP: Saw pod success Dec 26 12:02:17.951: INFO: Pod "downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005" satisfied condition "success or failure" Dec 26 12:02:17.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005 container client-container: STEP: delete the pod Dec 26 12:02:18.430: INFO: Waiting for pod downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005 to disappear Dec 26 12:02:18.657: INFO: Pod downwardapi-volume-88c65a52-27d7-11ea-948a-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 26 12:02:18.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q5sm2" for this suite. Dec 26 12:02:24.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 12:02:24.953: INFO: namespace: e2e-tests-downward-api-q5sm2, resource: bindings, ignored listing per whitelist Dec 26 12:02:25.022: INFO: namespace e2e-tests-downward-api-q5sm2 deletion completed in 6.323782296s • [SLOW TEST:19.763 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 26 12:02:25.023: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 26 12:02:25.269: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 14.330666ms)
Dec 26 12:02:25.274: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.904826ms)
Dec 26 12:02:25.280: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.463401ms)
Dec 26 12:02:25.285: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.130449ms)
Dec 26 12:02:25.290: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.425766ms)
Dec 26 12:02:25.296: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.505018ms)
Dec 26 12:02:25.302: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.584923ms)
Dec 26 12:02:25.307: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.40416ms)
Dec 26 12:02:25.312: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.12671ms)
Dec 26 12:02:25.317: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.239469ms)
Dec 26 12:02:25.323: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.990486ms)
Dec 26 12:02:25.331: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.24208ms)
Dec 26 12:02:25.403: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 72.299563ms)
Dec 26 12:02:25.419: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.574447ms)
Dec 26 12:02:25.433: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.86269ms)
Dec 26 12:02:25.440: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.837811ms)
Dec 26 12:02:25.445: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.710311ms)
Dec 26 12:02:25.452: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.049954ms)
Dec 26 12:02:25.458: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.791433ms)
Dec 26 12:02:25.465: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.684474ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:02:25.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-q9rtv" for this suite.
Dec 26 12:02:31.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:02:31.704: INFO: namespace: e2e-tests-proxy-q9rtv, resource: bindings, ignored listing per whitelist
Dec 26 12:02:31.722: INFO: namespace e2e-tests-proxy-q9rtv deletion completed in 6.251623959s

• [SLOW TEST:6.700 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:02:31.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 26 12:02:33.001: INFO: Pod name wrapped-volume-race-9926d3fc-27d7-11ea-948a-0242ac110005: Found 0 pods out of 5
Dec 26 12:02:38.035: INFO: Pod name wrapped-volume-race-9926d3fc-27d7-11ea-948a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9926d3fc-27d7-11ea-948a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-4jcgk, will wait for the garbage collector to delete the pods
Dec 26 12:04:52.286: INFO: Deleting ReplicationController wrapped-volume-race-9926d3fc-27d7-11ea-948a-0242ac110005 took: 41.573459ms
Dec 26 12:04:52.686: INFO: Terminating ReplicationController wrapped-volume-race-9926d3fc-27d7-11ea-948a-0242ac110005 pods took: 400.700357ms
STEP: Creating RC which spawns configmap-volume pods
Dec 26 12:05:43.253: INFO: Pod name wrapped-volume-race-0a811a45-27d8-11ea-948a-0242ac110005: Found 0 pods out of 5
Dec 26 12:05:48.287: INFO: Pod name wrapped-volume-race-0a811a45-27d8-11ea-948a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0a811a45-27d8-11ea-948a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-4jcgk, will wait for the garbage collector to delete the pods
Dec 26 12:08:12.529: INFO: Deleting ReplicationController wrapped-volume-race-0a811a45-27d8-11ea-948a-0242ac110005 took: 75.281831ms
Dec 26 12:08:12.830: INFO: Terminating ReplicationController wrapped-volume-race-0a811a45-27d8-11ea-948a-0242ac110005 pods took: 300.974982ms
STEP: Creating RC which spawns configmap-volume pods
Dec 26 12:09:03.133: INFO: Pod name wrapped-volume-race-81ae2673-27d8-11ea-948a-0242ac110005: Found 0 pods out of 5
Dec 26 12:09:08.188: INFO: Pod name wrapped-volume-race-81ae2673-27d8-11ea-948a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-81ae2673-27d8-11ea-948a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-4jcgk, will wait for the garbage collector to delete the pods
Dec 26 12:11:12.465: INFO: Deleting ReplicationController wrapped-volume-race-81ae2673-27d8-11ea-948a-0242ac110005 took: 119.997416ms
Dec 26 12:11:12.766: INFO: Terminating ReplicationController wrapped-volume-race-81ae2673-27d8-11ea-948a-0242ac110005 pods took: 301.153012ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:12:04.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4jcgk" for this suite.
Dec 26 12:12:14.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:12:14.964: INFO: namespace: e2e-tests-emptydir-wrapper-4jcgk, resource: bindings, ignored listing per whitelist
Dec 26 12:12:15.005: INFO: namespace e2e-tests-emptydir-wrapper-4jcgk deletion completed in 10.300440767s

• [SLOW TEST:583.282 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:12:15.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:12:15.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-4p2l8" to be "success or failure"
Dec 26 12:12:15.403: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 119.834261ms
Dec 26 12:12:17.419: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135445059s
Dec 26 12:12:19.907: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.62332201s
Dec 26 12:12:21.936: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653092927s
Dec 26 12:12:23.966: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.682616496s
Dec 26 12:12:25.976: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.692575589s
Dec 26 12:12:27.988: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.704328324s
Dec 26 12:12:30.004: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.720459515s
STEP: Saw pod success
Dec 26 12:12:30.004: INFO: Pod "downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:12:30.011: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:12:30.123: INFO: Waiting for pod downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005 to disappear
Dec 26 12:12:30.219: INFO: Pod downwardapi-volume-f43f59ac-27d8-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:12:30.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4p2l8" for this suite.
Dec 26 12:12:36.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:12:36.518: INFO: namespace: e2e-tests-downward-api-4p2l8, resource: bindings, ignored listing per whitelist
Dec 26 12:12:36.555: INFO: namespace e2e-tests-downward-api-4p2l8 deletion completed in 6.31146066s

• [SLOW TEST:21.550 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:12:36.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 26 12:12:36.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 26 12:12:37.084: INFO: stderr: ""
Dec 26 12:12:37.084: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:12:37.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hpg5q" for this suite.
Dec 26 12:12:43.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:12:43.172: INFO: namespace: e2e-tests-kubectl-hpg5q, resource: bindings, ignored listing per whitelist
Dec 26 12:12:43.489: INFO: namespace e2e-tests-kubectl-hpg5q deletion completed in 6.389589799s

• [SLOW TEST:6.933 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:12:43.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:12:43.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-t7h7n" for this suite.
Dec 26 12:13:08.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:13:08.322: INFO: namespace: e2e-tests-kubelet-test-t7h7n, resource: bindings, ignored listing per whitelist
Dec 26 12:13:08.329: INFO: namespace e2e-tests-kubelet-test-t7h7n deletion completed in 24.323567279s

• [SLOW TEST:24.840 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:13:08.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 12:13:08.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-wg6cp'
Dec 26 12:13:10.539: INFO: stderr: ""
Dec 26 12:13:10.539: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 26 12:13:10.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-wg6cp'
Dec 26 12:13:16.441: INFO: stderr: ""
Dec 26 12:13:16.441: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:13:16.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wg6cp" for this suite.
Dec 26 12:13:22.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:13:22.724: INFO: namespace: e2e-tests-kubectl-wg6cp, resource: bindings, ignored listing per whitelist
Dec 26 12:13:22.779: INFO: namespace e2e-tests-kubectl-wg6cp deletion completed in 6.329513148s

• [SLOW TEST:14.450 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:13:22.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 26 12:13:22.961: INFO: namespace e2e-tests-kubectl-nmknc
Dec 26 12:13:22.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nmknc'
Dec 26 12:13:23.440: INFO: stderr: ""
Dec 26 12:13:23.440: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 26 12:13:25.313: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:25.313: INFO: Found 0 / 1
Dec 26 12:13:25.747: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:25.747: INFO: Found 0 / 1
Dec 26 12:13:26.475: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:26.475: INFO: Found 0 / 1
Dec 26 12:13:27.471: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:27.471: INFO: Found 0 / 1
Dec 26 12:13:28.995: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:28.995: INFO: Found 0 / 1
Dec 26 12:13:29.938: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:29.938: INFO: Found 0 / 1
Dec 26 12:13:30.703: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:30.703: INFO: Found 0 / 1
Dec 26 12:13:31.466: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:31.466: INFO: Found 0 / 1
Dec 26 12:13:32.463: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:32.463: INFO: Found 0 / 1
Dec 26 12:13:33.472: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:33.473: INFO: Found 1 / 1
Dec 26 12:13:33.473: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 26 12:13:33.495: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 12:13:33.495: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 26 12:13:33.495: INFO: wait on redis-master startup in e2e-tests-kubectl-nmknc 
Dec 26 12:13:33.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2w94m redis-master --namespace=e2e-tests-kubectl-nmknc'
Dec 26 12:13:33.759: INFO: stderr: ""
Dec 26 12:13:33.759: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Dec 12:13:31.239 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Dec 12:13:31.240 # Server started, Redis version 3.2.12\n1:M 26 Dec 12:13:31.240 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Dec 12:13:31.240 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 26 12:13:33.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-nmknc'
Dec 26 12:13:34.128: INFO: stderr: ""
Dec 26 12:13:34.128: INFO: stdout: "service/rm2 exposed\n"
Dec 26 12:13:34.146: INFO: Service rm2 in namespace e2e-tests-kubectl-nmknc found.
STEP: exposing service
Dec 26 12:13:36.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-nmknc'
Dec 26 12:13:36.460: INFO: stderr: ""
Dec 26 12:13:36.460: INFO: stdout: "service/rm3 exposed\n"
Dec 26 12:13:36.561: INFO: Service rm3 in namespace e2e-tests-kubectl-nmknc found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:13:38.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nmknc" for this suite.
Dec 26 12:14:04.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:14:04.957: INFO: namespace: e2e-tests-kubectl-nmknc, resource: bindings, ignored listing per whitelist
Dec 26 12:14:05.135: INFO: namespace e2e-tests-kubectl-nmknc deletion completed in 26.533124806s

• [SLOW TEST:42.355 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:14:05.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:15:05.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c5qtk" for this suite.
Dec 26 12:15:29.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:15:29.497: INFO: namespace: e2e-tests-container-probe-c5qtk, resource: bindings, ignored listing per whitelist
Dec 26 12:15:29.613: INFO: namespace e2e-tests-container-probe-c5qtk deletion completed in 24.270992882s

• [SLOW TEST:84.478 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:15:29.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 26 12:15:29.840: INFO: Waiting up to 5m0s for pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005" in namespace "e2e-tests-containers-5557s" to be "success or failure"
Dec 26 12:15:29.969: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 129.547ms
Dec 26 12:15:31.990: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150703345s
Dec 26 12:15:34.017: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176902962s
Dec 26 12:15:36.309: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469377267s
Dec 26 12:15:38.343: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50283651s
Dec 26 12:15:40.356: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.515968565s
STEP: Saw pod success
Dec 26 12:15:40.356: INFO: Pod "client-containers-6838fec5-27d9-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:15:40.364: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6838fec5-27d9-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:15:41.366: INFO: Waiting for pod client-containers-6838fec5-27d9-11ea-948a-0242ac110005 to disappear
Dec 26 12:15:41.576: INFO: Pod client-containers-6838fec5-27d9-11ea-948a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:15:41.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5557s" for this suite.
Dec 26 12:15:47.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:15:47.880: INFO: namespace: e2e-tests-containers-5557s, resource: bindings, ignored listing per whitelist
Dec 26 12:15:47.977: INFO: namespace e2e-tests-containers-5557s deletion completed in 6.378666447s

• [SLOW TEST:18.363 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:15:47.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 26 12:15:48.113: INFO: Waiting up to 5m0s for pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-96nl7" to be "success or failure"
Dec 26 12:15:48.211: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.229944ms
Dec 26 12:15:50.288: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174793592s
Dec 26 12:15:52.334: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221270493s
Dec 26 12:15:54.554: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440639369s
Dec 26 12:15:56.585: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.47199367s
Dec 26 12:15:58.633: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.519748714s
STEP: Saw pod success
Dec 26 12:15:58.633: INFO: Pod "downward-api-731ea123-27d9-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:15:58.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-731ea123-27d9-11ea-948a-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 26 12:15:58.978: INFO: Waiting for pod downward-api-731ea123-27d9-11ea-948a-0242ac110005 to disappear
Dec 26 12:15:58.987: INFO: Pod downward-api-731ea123-27d9-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:15:58.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-96nl7" for this suite.
Dec 26 12:16:05.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:16:05.102: INFO: namespace: e2e-tests-downward-api-96nl7, resource: bindings, ignored listing per whitelist
Dec 26 12:16:05.173: INFO: namespace e2e-tests-downward-api-96nl7 deletion completed in 6.178121998s

• [SLOW TEST:17.196 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:16:05.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-zr628/configmap-test-7d6a1540-27d9-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 12:16:05.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-zr628" to be "success or failure"
Dec 26 12:16:05.403: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.330836ms
Dec 26 12:16:07.648: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258166826s
Dec 26 12:16:09.671: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280627191s
Dec 26 12:16:11.682: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.291867658s
Dec 26 12:16:13.744: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353934934s
Dec 26 12:16:15.943: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.55350071s
STEP: Saw pod success
Dec 26 12:16:15.944: INFO: Pod "pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:16:16.287: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005 container env-test: 
STEP: delete the pod
Dec 26 12:16:16.799: INFO: Waiting for pod pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005 to disappear
Dec 26 12:16:16.829: INFO: Pod pod-configmaps-7d6af482-27d9-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:16:16.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zr628" for this suite.
Dec 26 12:16:22.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:16:23.042: INFO: namespace: e2e-tests-configmap-zr628, resource: bindings, ignored listing per whitelist
Dec 26 12:16:23.233: INFO: namespace e2e-tests-configmap-zr628 deletion completed in 6.389477463s

• [SLOW TEST:18.060 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:16:23.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 26 12:16:24.177: INFO: created pod pod-service-account-defaultsa
Dec 26 12:16:24.177: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 26 12:16:24.184: INFO: created pod pod-service-account-mountsa
Dec 26 12:16:24.184: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 26 12:16:24.373: INFO: created pod pod-service-account-nomountsa
Dec 26 12:16:24.373: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 26 12:16:24.468: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 26 12:16:24.469: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 26 12:16:24.602: INFO: created pod pod-service-account-mountsa-mountspec
Dec 26 12:16:24.602: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 26 12:16:25.152: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 26 12:16:25.152: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 26 12:16:25.201: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 26 12:16:25.202: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 26 12:16:27.098: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 26 12:16:27.098: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 26 12:16:27.968: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 26 12:16:27.968: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:16:27.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-hqphd" for this suite.
Dec 26 12:16:57.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:16:57.221: INFO: namespace: e2e-tests-svcaccounts-hqphd, resource: bindings, ignored listing per whitelist
Dec 26 12:16:57.287: INFO: namespace e2e-tests-svcaccounts-hqphd deletion completed in 28.96316934s

• [SLOW TEST:34.053 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:16:57.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 26 12:16:57.589: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:16:57.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8nlpn" for this suite.
Dec 26 12:17:03.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:17:03.933: INFO: namespace: e2e-tests-kubectl-8nlpn, resource: bindings, ignored listing per whitelist
Dec 26 12:17:04.098: INFO: namespace e2e-tests-kubectl-8nlpn deletion completed in 6.33180388s

• [SLOW TEST:6.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:17:04.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-5jmrq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5jmrq to expose endpoints map[]
Dec 26 12:17:04.349: INFO: Get endpoints failed (9.84638ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 26 12:17:05.376: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5jmrq exposes endpoints map[] (1.036519648s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5jmrq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5jmrq to expose endpoints map[pod1:[80]]
Dec 26 12:17:09.629: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.223688886s elapsed, will retry)
Dec 26 12:17:14.953: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5jmrq exposes endpoints map[pod1:[80]] (9.54689205s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5jmrq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5jmrq to expose endpoints map[pod1:[80] pod2:[80]]
Dec 26 12:17:19.939: INFO: Unexpected endpoints: found map[a130556e-27d9-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.954967566s elapsed, will retry)
Dec 26 12:17:25.636: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5jmrq exposes endpoints map[pod1:[80] pod2:[80]] (10.651885785s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-5jmrq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5jmrq to expose endpoints map[pod2:[80]]
Dec 26 12:17:25.867: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5jmrq exposes endpoints map[pod2:[80]] (166.683704ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-5jmrq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5jmrq to expose endpoints map[]
Dec 26 12:17:26.520: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5jmrq exposes endpoints map[] (516.401601ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:17:27.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5jmrq" for this suite.
Dec 26 12:17:51.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:17:52.225: INFO: namespace: e2e-tests-services-5jmrq, resource: bindings, ignored listing per whitelist
Dec 26 12:17:52.250: INFO: namespace e2e-tests-services-5jmrq deletion completed in 24.54380881s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.152 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:17:52.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:18:52.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-4gbx2" for this suite.
Dec 26 12:18:59.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:18:59.536: INFO: namespace: e2e-tests-container-runtime-4gbx2, resource: bindings, ignored listing per whitelist
Dec 26 12:18:59.569: INFO: namespace e2e-tests-container-runtime-4gbx2 deletion completed in 6.752081743s

• [SLOW TEST:67.319 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:18:59.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 12:18:59.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-kngqm'
Dec 26 12:19:00.172: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 26 12:19:00.172: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 26 12:19:02.197: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bfd5w]
Dec 26 12:19:02.197: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bfd5w" in namespace "e2e-tests-kubectl-kngqm" to be "running and ready"
Dec 26 12:19:02.201: INFO: Pod "e2e-test-nginx-rc-bfd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212488ms
Dec 26 12:19:04.212: INFO: Pod "e2e-test-nginx-rc-bfd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015211484s
Dec 26 12:19:06.313: INFO: Pod "e2e-test-nginx-rc-bfd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115737297s
Dec 26 12:19:08.333: INFO: Pod "e2e-test-nginx-rc-bfd5w": Phase="Running", Reason="", readiness=true. Elapsed: 6.135263525s
Dec 26 12:19:08.333: INFO: Pod "e2e-test-nginx-rc-bfd5w" satisfied condition "running and ready"
Dec 26 12:19:08.333: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-bfd5w]
Dec 26 12:19:08.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kngqm'
Dec 26 12:19:08.627: INFO: stderr: ""
Dec 26 12:19:08.627: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 26 12:19:08.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kngqm'
Dec 26 12:19:08.816: INFO: stderr: ""
Dec 26 12:19:08.816: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:19:08.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kngqm" for this suite.
Dec 26 12:19:17.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:19:17.236: INFO: namespace: e2e-tests-kubectl-kngqm, resource: bindings, ignored listing per whitelist
Dec 26 12:19:17.332: INFO: namespace e2e-tests-kubectl-kngqm deletion completed in 8.487433979s

• [SLOW TEST:17.762 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:19:17.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 26 12:19:17.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:17.912: INFO: stderr: ""
Dec 26 12:19:17.912: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 12:19:17.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:18.241: INFO: stderr: ""
Dec 26 12:19:18.241: INFO: stdout: "update-demo-nautilus-qfg7l update-demo-nautilus-qq2s8 "
Dec 26 12:19:18.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qfg7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:18.367: INFO: stderr: ""
Dec 26 12:19:18.368: INFO: stdout: ""
Dec 26 12:19:18.368: INFO: update-demo-nautilus-qfg7l is created but not running
Dec 26 12:19:23.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:23.539: INFO: stderr: ""
Dec 26 12:19:23.539: INFO: stdout: "update-demo-nautilus-qfg7l update-demo-nautilus-qq2s8 "
Dec 26 12:19:23.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qfg7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:23.669: INFO: stderr: ""
Dec 26 12:19:23.669: INFO: stdout: ""
Dec 26 12:19:23.669: INFO: update-demo-nautilus-qfg7l is created but not running
Dec 26 12:19:28.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:28.805: INFO: stderr: ""
Dec 26 12:19:28.805: INFO: stdout: "update-demo-nautilus-qfg7l update-demo-nautilus-qq2s8 "
Dec 26 12:19:28.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qfg7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:28.941: INFO: stderr: ""
Dec 26 12:19:28.941: INFO: stdout: ""
Dec 26 12:19:28.941: INFO: update-demo-nautilus-qfg7l is created but not running
Dec 26 12:19:33.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:34.187: INFO: stderr: ""
Dec 26 12:19:34.188: INFO: stdout: "update-demo-nautilus-qfg7l update-demo-nautilus-qq2s8 "
Dec 26 12:19:34.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qfg7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:34.308: INFO: stderr: ""
Dec 26 12:19:34.308: INFO: stdout: "true"
Dec 26 12:19:34.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qfg7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:34.426: INFO: stderr: ""
Dec 26 12:19:34.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 12:19:34.426: INFO: validating pod update-demo-nautilus-qfg7l
Dec 26 12:19:34.471: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 12:19:34.471: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 12:19:34.471: INFO: update-demo-nautilus-qfg7l is verified up and running
Dec 26 12:19:34.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qq2s8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:34.593: INFO: stderr: ""
Dec 26 12:19:34.593: INFO: stdout: "true"
Dec 26 12:19:34.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qq2s8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:34.693: INFO: stderr: ""
Dec 26 12:19:34.693: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 12:19:34.693: INFO: validating pod update-demo-nautilus-qq2s8
Dec 26 12:19:34.701: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 12:19:34.701: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 12:19:34.701: INFO: update-demo-nautilus-qq2s8 is verified up and running
STEP: scaling down the replication controller
Dec 26 12:19:34.705: INFO: scanned /root for discovery docs: 
Dec 26 12:19:34.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:36.216: INFO: stderr: ""
Dec 26 12:19:36.217: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 12:19:36.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:36.352: INFO: stderr: ""
Dec 26 12:19:36.352: INFO: stdout: "update-demo-nautilus-qfg7l update-demo-nautilus-qq2s8 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 26 12:19:41.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:41.534: INFO: stderr: ""
Dec 26 12:19:41.534: INFO: stdout: "update-demo-nautilus-qfg7l update-demo-nautilus-qq2s8 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 26 12:19:46.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:46.721: INFO: stderr: ""
Dec 26 12:19:46.721: INFO: stdout: "update-demo-nautilus-qq2s8 "
Dec 26 12:19:46.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qq2s8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:46.865: INFO: stderr: ""
Dec 26 12:19:46.865: INFO: stdout: "true"
Dec 26 12:19:46.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qq2s8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:47.023: INFO: stderr: ""
Dec 26 12:19:47.023: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 12:19:47.023: INFO: validating pod update-demo-nautilus-qq2s8
Dec 26 12:19:47.037: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 12:19:47.037: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 12:19:47.037: INFO: update-demo-nautilus-qq2s8 is verified up and running
STEP: scaling up the replication controller
Dec 26 12:19:47.041: INFO: scanned /root for discovery docs: 
Dec 26 12:19:47.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:48.829: INFO: stderr: ""
Dec 26 12:19:48.829: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 12:19:48.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:49.043: INFO: stderr: ""
Dec 26 12:19:49.043: INFO: stdout: "update-demo-nautilus-g28kw update-demo-nautilus-qq2s8 "
Dec 26 12:19:49.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g28kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:49.498: INFO: stderr: ""
Dec 26 12:19:49.498: INFO: stdout: ""
Dec 26 12:19:49.498: INFO: update-demo-nautilus-g28kw is created but not running
Dec 26 12:19:54.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:54.728: INFO: stderr: ""
Dec 26 12:19:54.728: INFO: stdout: "update-demo-nautilus-g28kw update-demo-nautilus-qq2s8 "
Dec 26 12:19:54.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g28kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:19:54.867: INFO: stderr: ""
Dec 26 12:19:54.867: INFO: stdout: ""
Dec 26 12:19:54.868: INFO: update-demo-nautilus-g28kw is created but not running
Dec 26 12:19:59.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:00.052: INFO: stderr: ""
Dec 26 12:20:00.053: INFO: stdout: "update-demo-nautilus-g28kw update-demo-nautilus-qq2s8 "
Dec 26 12:20:00.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g28kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:00.199: INFO: stderr: ""
Dec 26 12:20:00.199: INFO: stdout: "true"
Dec 26 12:20:00.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g28kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:00.314: INFO: stderr: ""
Dec 26 12:20:00.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 12:20:00.314: INFO: validating pod update-demo-nautilus-g28kw
Dec 26 12:20:00.327: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 12:20:00.327: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 12:20:00.327: INFO: update-demo-nautilus-g28kw is verified up and running
Dec 26 12:20:00.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qq2s8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:00.446: INFO: stderr: ""
Dec 26 12:20:00.446: INFO: stdout: "true"
Dec 26 12:20:00.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qq2s8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:00.573: INFO: stderr: ""
Dec 26 12:20:00.573: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 12:20:00.573: INFO: validating pod update-demo-nautilus-qq2s8
Dec 26 12:20:00.583: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 12:20:00.583: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 12:20:00.583: INFO: update-demo-nautilus-qq2s8 is verified up and running
STEP: using delete to clean up resources
Dec 26 12:20:00.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:00.838: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 12:20:00.838: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 26 12:20:00.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xvxr7'
Dec 26 12:20:01.092: INFO: stderr: "No resources found.\n"
Dec 26 12:20:01.092: INFO: stdout: ""
Dec 26 12:20:01.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xvxr7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 26 12:20:01.241: INFO: stderr: ""
Dec 26 12:20:01.241: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:20:01.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xvxr7" for this suite.
Dec 26 12:20:25.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:20:25.430: INFO: namespace: e2e-tests-kubectl-xvxr7, resource: bindings, ignored listing per whitelist
Dec 26 12:20:25.504: INFO: namespace e2e-tests-kubectl-xvxr7 deletion completed in 24.232539835s

• [SLOW TEST:68.172 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
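The Update Demo spec above pipes a manifest into `kubectl create -f -`, scales the rc to 1 and back to 2, and polls pod state with the Go templates shown in the log. An approximate manifest for that rc follows; the rc name, replica count, `name=update-demo` label, and image all appear in the log, the remaining fields are assumptions.

```yaml
# Approximate manifest behind `kubectl create -f -` in the Update Demo spec;
# name, replicas, label, and image match the log, other fields are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

The repeated `kubectl get pods -o template --template=...` calls in the log then check, per pod, that the `update-demo` container has a `state.running` entry before the spec proceeds.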
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:20:25.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:20:25.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-nqs7w" to be "success or failure"
Dec 26 12:20:25.877: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.180767ms
Dec 26 12:20:28.219: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372315163s
Dec 26 12:20:30.233: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385959869s
Dec 26 12:20:32.281: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433621578s
Dec 26 12:20:35.005: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.158306989s
Dec 26 12:20:37.020: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.173358097s
STEP: Saw pod success
Dec 26 12:20:37.021: INFO: Pod "downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:20:37.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:20:37.462: INFO: Waiting for pod downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005 to disappear
Dec 26 12:20:37.493: INFO: Pod downwardapi-volume-18a63fd6-27da-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:20:37.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nqs7w" for this suite.
Dec 26 12:20:43.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:20:43.853: INFO: namespace: e2e-tests-downward-api-nqs7w, resource: bindings, ignored listing per whitelist
Dec 26 12:20:43.876: INFO: namespace e2e-tests-downward-api-nqs7w deletion completed in 6.370353742s

• [SLOW TEST:18.372 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
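The "should set mode on item file" spec above creates a pod with a downwardAPI volume whose item carries an explicit file mode, then checks the mode from inside the container. A minimal sketch of such a pod is below; the container name matches the log, but the image, command, mount path, item path, and the mode value `0400` are all assumptions.

```yaml
# Hedged sketch of a downwardAPI volume item with an explicit mode;
# container name from the log, everything else is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # assumed mode; the spec verifies whatever mode it sets
```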
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:20:43.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 26 12:20:44.200: INFO: Waiting up to 5m0s for pod "pod-2398e8bd-27da-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-lw5jw" to be "success or failure"
Dec 26 12:20:44.264: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.181523ms
Dec 26 12:20:46.429: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229151519s
Dec 26 12:20:48.456: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256469339s
Dec 26 12:20:50.472: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271977941s
Dec 26 12:20:52.669: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.469692735s
Dec 26 12:20:54.699: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.499072309s
STEP: Saw pod success
Dec 26 12:20:54.699: INFO: Pod "pod-2398e8bd-27da-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:20:54.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2398e8bd-27da-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:20:55.460: INFO: Waiting for pod pod-2398e8bd-27da-11ea-948a-0242ac110005 to disappear
Dec 26 12:20:55.466: INFO: Pod pod-2398e8bd-27da-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:20:55.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lw5jw" for this suite.
Dec 26 12:21:01.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:21:01.731: INFO: namespace: e2e-tests-emptydir-lw5jw, resource: bindings, ignored listing per whitelist
Dec 26 12:21:01.733: INFO: namespace e2e-tests-emptydir-lw5jw deletion completed in 6.227656797s

• [SLOW TEST:17.856 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
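The EmptyDir "(root,0666,tmpfs)" case above runs a pod as root against a memory-backed emptyDir and verifies a file with mode 0666 on it. A minimal sketch, assuming a busybox image and a stand-in command (the real spec uses a dedicated mounttest image):

```yaml
# Sketch of the (root,0666,tmpfs) case: memory-backed emptyDir, 0666 test file;
# image and command are assumptions standing in for the e2e mounttest image.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
              "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, as the test name indicates
```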
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:21:01.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 26 12:21:28.161: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:28.161: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:28.732: INFO: Exec stderr: ""
Dec 26 12:21:28.732: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:28.732: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:29.192: INFO: Exec stderr: ""
Dec 26 12:21:29.192: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:29.193: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:29.588: INFO: Exec stderr: ""
Dec 26 12:21:29.588: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:29.588: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:29.965: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 26 12:21:29.966: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:29.966: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:30.360: INFO: Exec stderr: ""
Dec 26 12:21:30.360: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:30.360: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:30.875: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 26 12:21:30.875: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:30.875: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:31.218: INFO: Exec stderr: ""
Dec 26 12:21:31.218: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:31.218: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:31.837: INFO: Exec stderr: ""
Dec 26 12:21:31.837: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:31.837: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:32.519: INFO: Exec stderr: ""
Dec 26 12:21:32.519: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bpmnn PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:21:32.519: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:21:32.860: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:21:32.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-bpmnn" for this suite.
Dec 26 12:22:28.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:22:29.044: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-bpmnn, resource: bindings, ignored listing per whitelist
Dec 26 12:22:29.173: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-bpmnn deletion completed in 56.291835101s

• [SLOW TEST:87.440 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
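The KubeletManagedEtcHosts spec above execs `cat /etc/hosts` across three container shapes: containers in a normal pod (kubelet-managed), a container that mounts its own `/etc/hosts` (not managed), and containers in a `hostNetwork: true` pod (not managed). A sketch of the two pod shapes follows; pod and container names match the log, while images, commands, and the hostPath volume are assumptions.

```yaml
# Sketch of the two pod shapes in the KubeletManagedEtcHosts spec;
# pod/container names from the log, other details are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true        # /etc/hosts is NOT kubelet-managed here
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod           # hostNetwork defaults to false: kubelet manages /etc/hosts
spec:
  containers:
  - name: busybox-3        # except in this container, which mounts its own /etc/hosts
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
```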
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:22:29.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:22:41.636: INFO: Waiting up to 5m0s for pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005" in namespace "e2e-tests-pods-2cc7x" to be "success or failure"
Dec 26 12:22:41.653: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.856715ms
Dec 26 12:22:43.680: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043897271s
Dec 26 12:22:45.695: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059201018s
Dec 26 12:22:47.718: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082268701s
Dec 26 12:22:49.795: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15846514s
Dec 26 12:22:51.806: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170014336s
STEP: Saw pod success
Dec 26 12:22:51.806: INFO: Pod "client-envvars-6997cd40-27da-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:22:51.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-6997cd40-27da-11ea-948a-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 26 12:22:52.287: INFO: Waiting for pod client-envvars-6997cd40-27da-11ea-948a-0242ac110005 to disappear
Dec 26 12:22:52.563: INFO: Pod client-envvars-6997cd40-27da-11ea-948a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:22:52.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2cc7x" for this suite.
Dec 26 12:23:44.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:23:44.766: INFO: namespace: e2e-tests-pods-2cc7x, resource: bindings, ignored listing per whitelist
Dec 26 12:23:44.988: INFO: namespace e2e-tests-pods-2cc7x deletion completed in 52.407492603s

• [SLOW TEST:75.815 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
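The `should contain environment variables for services` test above verifies that Kubernetes injects service-discovery variables into container environments. A minimal pod that would surface those variables might look like the following sketch; the pod name, image, and the service name `fooservice` are illustrative and not taken from the test:

```yaml
# Hypothetical pod: for an existing Service named "fooservice", the kubelet
# injects variables such as FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT
# into containers started after the Service exists.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-demo
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # prints the injected service variables
```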
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:23:44.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-5trn
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 12:23:45.187: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5trn" in namespace "e2e-tests-subpath-ptml8" to be "success or failure"
Dec 26 12:23:45.191: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360256ms
Dec 26 12:23:47.477: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289872765s
Dec 26 12:23:49.488: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301008249s
Dec 26 12:23:51.717: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.529875537s
Dec 26 12:23:53.735: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548231146s
Dec 26 12:23:55.751: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.564098706s
Dec 26 12:23:58.106: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.919427881s
Dec 26 12:24:00.119: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=true. Elapsed: 14.932608115s
Dec 26 12:24:02.130: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 16.943624755s
Dec 26 12:24:04.147: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 18.95990591s
Dec 26 12:24:06.184: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 20.997363355s
Dec 26 12:24:08.202: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 23.014850046s
Dec 26 12:24:10.213: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 25.026192813s
Dec 26 12:24:12.247: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 27.060390663s
Dec 26 12:24:14.258: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 29.070903073s
Dec 26 12:24:16.278: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 31.091397513s
Dec 26 12:24:18.297: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Running", Reason="", readiness=false. Elapsed: 33.110551891s
Dec 26 12:24:20.525: INFO: Pod "pod-subpath-test-configmap-5trn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.338184268s
STEP: Saw pod success
Dec 26 12:24:20.525: INFO: Pod "pod-subpath-test-configmap-5trn" satisfied condition "success or failure"
Dec 26 12:24:20.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-5trn container test-container-subpath-configmap-5trn: 
STEP: delete the pod
Dec 26 12:24:20.876: INFO: Waiting for pod pod-subpath-test-configmap-5trn to disappear
Dec 26 12:24:20.885: INFO: Pod pod-subpath-test-configmap-5trn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5trn
Dec 26 12:24:20.885: INFO: Deleting pod "pod-subpath-test-configmap-5trn" in namespace "e2e-tests-subpath-ptml8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:24:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ptml8" for this suite.
Dec 26 12:24:28.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:24:29.037: INFO: namespace: e2e-tests-subpath-ptml8, resource: bindings, ignored listing per whitelist
Dec 26 12:24:29.097: INFO: namespace e2e-tests-subpath-ptml8 deletion completed in 8.197203029s

• [SLOW TEST:44.109 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
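The subpath test that follows mounts a single file from a ConfigMap volume via `subPath`; the atomic-writer behavior it exercises is specific to ConfigMap/Secret/downwardAPI volumes. A simplified sketch of such a pod, with all names (`demo-configmap`, the key `data`) being illustrative assumptions:

```yaml
# Hypothetical pod: mounts one key of a ConfigMap as a single file
# rather than mounting the whole volume directory.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/demo/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/data   # a file path, not a directory
      subPath: data               # single key taken from the ConfigMap volume
  volumes:
  - name: cfg
    configMap:
      name: demo-configmap        # assumed to contain a key named "data"
```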
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:24:29.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 26 12:24:29.252: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:24:48.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-5rccq" for this suite.
Dec 26 12:24:54.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:24:54.657: INFO: namespace: e2e-tests-init-container-5rccq, resource: bindings, ignored listing per whitelist
Dec 26 12:24:54.900: INFO: namespace e2e-tests-init-container-5rccq deletion completed in 6.48373688s

• [SLOW TEST:25.801 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
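The InitContainer test above creates a `RestartNever` pod whose init containers fail, then asserts the app containers never start. A minimal manifest reproducing that situation might look like this sketch (pod and container names are illustrative, not taken from the test):

```yaml
# Hypothetical pod: with restartPolicy Never, a failing init container is
# not retried, the pod ends up Failed, and the app container never runs.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]   # init container exits non-zero
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine   # should never be started
```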
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:24:54.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:24:55.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7nxl6" for this suite.
Dec 26 12:25:01.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:25:01.495: INFO: namespace: e2e-tests-services-7nxl6, resource: bindings, ignored listing per whitelist
Dec 26 12:25:01.576: INFO: namespace e2e-tests-services-7nxl6 deletion completed in 6.385912083s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.676 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:25:01.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:25:01.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-hl4vt" to be "success or failure"
Dec 26 12:25:01.832: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.56334ms
Dec 26 12:25:03.976: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163351862s
Dec 26 12:25:05.991: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178640165s
Dec 26 12:25:08.036: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224254748s
Dec 26 12:25:10.053: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240593585s
Dec 26 12:25:12.064: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.25204579s
STEP: Saw pod success
Dec 26 12:25:12.064: INFO: Pod "downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:25:12.072: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:25:12.702: INFO: Waiting for pod downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005 to disappear
Dec 26 12:25:13.082: INFO: Pod downwardapi-volume-bd23a8ee-27da-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:25:13.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hl4vt" for this suite.
Dec 26 12:25:19.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:25:19.391: INFO: namespace: e2e-tests-downward-api-hl4vt, resource: bindings, ignored listing per whitelist
Dec 26 12:25:19.505: INFO: namespace e2e-tests-downward-api-hl4vt deletion completed in 6.399264099s

• [SLOW TEST:17.929 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
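The Downward API volume test above projects the pod's own name into a mounted file. A sketch of such a pod using the conventional `fieldRef` layout; the pod name, image, and file path are illustrative:

```yaml
# Hypothetical pod: exposes metadata.name as a file via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # file contents become the pod's name
```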
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:25:19.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 26 12:25:19.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:22.073: INFO: stderr: ""
Dec 26 12:25:22.073: INFO: stdout: "pod/pause created\n"
Dec 26 12:25:22.073: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 26 12:25:22.074: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-fnbrp" to be "running and ready"
Dec 26 12:25:22.104: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 29.975333ms
Dec 26 12:25:24.133: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059287436s
Dec 26 12:25:26.148: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074328945s
Dec 26 12:25:28.223: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148957074s
Dec 26 12:25:30.240: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166665379s
Dec 26 12:25:32.249: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.175331083s
Dec 26 12:25:32.249: INFO: Pod "pause" satisfied condition "running and ready"
Dec 26 12:25:32.249: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 26 12:25:32.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:32.516: INFO: stderr: ""
Dec 26 12:25:32.516: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 26 12:25:32.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:32.671: INFO: stderr: ""
Dec 26 12:25:32.671: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 26 12:25:32.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:32.798: INFO: stderr: ""
Dec 26 12:25:32.798: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 26 12:25:32.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:32.909: INFO: stderr: ""
Dec 26 12:25:32.909: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 26 12:25:32.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:33.055: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 12:25:33.055: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 26 12:25:33.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-fnbrp'
Dec 26 12:25:33.175: INFO: stderr: "No resources found.\n"
Dec 26 12:25:33.175: INFO: stdout: ""
Dec 26 12:25:33.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-fnbrp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 26 12:25:33.279: INFO: stderr: ""
Dec 26 12:25:33.279: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:25:33.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fnbrp" for this suite.
Dec 26 12:25:39.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:25:39.511: INFO: namespace: e2e-tests-kubectl-fnbrp, resource: bindings, ignored listing per whitelist
Dec 26 12:25:39.537: INFO: namespace e2e-tests-kubectl-fnbrp deletion completed in 6.246346772s

• [SLOW TEST:20.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:25:39.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:25:39.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-pjv75" to be "success or failure"
Dec 26 12:25:39.837: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.885854ms
Dec 26 12:25:41.870: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062396285s
Dec 26 12:25:43.895: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08723472s
Dec 26 12:25:46.518: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.710826753s
Dec 26 12:25:48.539: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.731800174s
Dec 26 12:25:50.564: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.756304083s
STEP: Saw pod success
Dec 26 12:25:50.564: INFO: Pod "downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:25:50.574: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:25:51.997: INFO: Waiting for pod downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005 to disappear
Dec 26 12:25:52.034: INFO: Pod downwardapi-volume-d3bdb63c-27da-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:25:52.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pjv75" for this suite.
Dec 26 12:25:58.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:25:59.219: INFO: namespace: e2e-tests-projected-pjv75, resource: bindings, ignored listing per whitelist
Dec 26 12:25:59.347: INFO: namespace e2e-tests-projected-pjv75 deletion completed in 7.304262895s

• [SLOW TEST:19.809 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:25:59.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 12:25:59.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-sfs9s'
Dec 26 12:25:59.593: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 26 12:25:59.593: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 26 12:26:03.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-sfs9s'
Dec 26 12:26:03.972: INFO: stderr: ""
Dec 26 12:26:03.972: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:26:03.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sfs9s" for this suite.
Dec 26 12:26:10.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:26:10.174: INFO: namespace: e2e-tests-kubectl-sfs9s, resource: bindings, ignored listing per whitelist
Dec 26 12:26:10.183: INFO: namespace e2e-tests-kubectl-sfs9s deletion completed in 6.196013494s

• [SLOW TEST:10.835 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
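The deprecation warning in the `Kubectl run deployment` output above points toward `kubectl create` (the `--generator=deployment/v1beta1` path produced an `extensions/v1beta1` Deployment). An equivalent manifest in the replacement `apps/v1` API might look like this; the name and `run:` label are assumptions modeled on the test's naming:

```yaml
# Hypothetical apps/v1 equivalent of the deprecated
# "kubectl run --generator=deployment/v1beta1" invocation in the log.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```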
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:26:10.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-r7rdz
Dec 26 12:26:20.384: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-r7rdz
STEP: checking the pod's current state and verifying that restartCount is present
Dec 26 12:26:20.391: INFO: Initial restart count of pod liveness-exec is 0
Dec 26 12:27:13.738: INFO: Restart count of pod e2e-tests-container-probe-r7rdz/liveness-exec is now 1 (53.347449891s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:27:13.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-r7rdz" for this suite.
Dec 26 12:27:19.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:27:20.053: INFO: namespace: e2e-tests-container-probe-r7rdz, resource: bindings, ignored listing per whitelist
Dec 26 12:27:20.220: INFO: namespace e2e-tests-container-probe-r7rdz deletion completed in 6.276226823s

• [SLOW TEST:70.037 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
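The Probing container test that follows watches a pod named `liveness-exec` get restarted when its `cat /tmp/health` exec probe starts failing (restart count reaches 1 after roughly 53s in the log). A sketch of such a pod, with timing values assumed from the conventional liveness-probe example rather than read from the test:

```yaml
# Hypothetical pod: the file exists for 30s, then is removed, so the
# exec probe fails and the kubelet restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    args:
    - sh
    - -c
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
```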
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:27:20.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 26 12:27:20.389: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix244933649/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:27:20.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z9zl6" for this suite.
Dec 26 12:27:26.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:27:26.683: INFO: namespace: e2e-tests-kubectl-z9zl6, resource: bindings, ignored listing per whitelist
Dec 26 12:27:26.808: INFO: namespace e2e-tests-kubectl-z9zl6 deletion completed in 6.246982051s

• [SLOW TEST:6.587 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:27:26.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 26 12:27:26.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-h6ppz run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 26 12:27:38.042: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 26 12:27:38.042: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:27:40.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h6ppz" for this suite.
Dec 26 12:27:46.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:27:46.792: INFO: namespace: e2e-tests-kubectl-h6ppz, resource: bindings, ignored listing per whitelist
Dec 26 12:27:46.841: INFO: namespace e2e-tests-kubectl-h6ppz deletion completed in 6.379083918s

• [SLOW TEST:20.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
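The `--rm` job test above relies on the `job/v1` generator, which the log itself flags as deprecated (and which was later removed from kubectl). A roughly equivalent attach-with-stdin invocation on current kubectl creates a bare Pod rather than a Job; this sketch mirrors the test's image and command but is not the test's own code:

```shell
# Pipe data on stdin, attach, and delete the pod when the command exits.
# On modern kubectl, `run` always creates a Pod; use `kubectl create job`
# if an actual Job object is required.
echo 'abcd1234' | kubectl run e2e-test-rm-busybox \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --restart=Never --attach=true --stdin \
  -- sh -c "cat && echo 'stdin closed'"
```

As in the log, stdout should contain the piped input followed by `stdin closed`, and the created resource is deleted automatically once the command completes.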
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:27:46.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1226 12:27:57.101393       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 12:27:57.101: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:27:57.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-h4k6t" for this suite.
Dec 26 12:28:03.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:28:03.297: INFO: namespace: e2e-tests-gc-h4k6t, resource: bindings, ignored listing per whitelist
Dec 26 12:28:03.313: INFO: namespace e2e-tests-gc-h4k6t deletion completed in 6.20301852s

• [SLOW TEST:16.472 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
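The garbage-collector test above creates a ReplicationController, deletes it without orphaning, and waits for the controller's pods to be garbage-collected via their owner references. A minimal manifest that reproduces the setup might look like this (names and labels are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-test-rc
spec:
  replicas: 2
  selector:
    app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Deleting the controller with the default (cascading) deletion, e.g. `kubectl delete rc gc-test-rc`, lets the garbage collector remove the dependent pods; passing an orphan propagation policy instead would leave them running, which is the behavior the "orphaning" variants of this test cover.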
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:28:03.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:28:03.495: INFO: Creating deployment "test-recreate-deployment"
Dec 26 12:28:03.511: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 26 12:28:03.524: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 26 12:28:05.559: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 26 12:28:05.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:28:07.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:28:09.966: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:28:11.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:28:13.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960083, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:28:15.588: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 26 12:28:15.609: INFO: Updating deployment test-recreate-deployment
Dec 26 12:28:15.609: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 26 12:28:16.382: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-c99v5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c99v5/deployments/test-recreate-deployment,UID:2973edc9-27db-11ea-a994-fa163e34d433,ResourceVersion:16125341,Generation:2,CreationTimestamp:2019-12-26 12:28:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-26 12:28:16 +0000 UTC 2019-12-26 12:28:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-26 12:28:16 +0000 UTC 2019-12-26 12:28:03 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 26 12:28:16.398: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-c99v5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c99v5/replicasets/test-recreate-deployment-589c4bfd,UID:30d376d0-27db-11ea-a994-fa163e34d433,ResourceVersion:16125337,Generation:1,CreationTimestamp:2019-12-26 12:28:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2973edc9-27db-11ea-a994-fa163e34d433 0xc00158d40f 0xc00158d490}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 26 12:28:16.398: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 26 12:28:16.399: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-c99v5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c99v5/replicasets/test-recreate-deployment-5bf7f65dc,UID:2977cd43-27db-11ea-a994-fa163e34d433,ResourceVersion:16125329,Generation:2,CreationTimestamp:2019-12-26 12:28:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2973edc9-27db-11ea-a994-fa163e34d433 0xc00158d550 0xc00158d551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 26 12:28:16.416: INFO: Pod "test-recreate-deployment-589c4bfd-tbwln" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-tbwln,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-c99v5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-c99v5/pods/test-recreate-deployment-589c4bfd-tbwln,UID:30e5cbc7-27db-11ea-a994-fa163e34d433,ResourceVersion:16125342,Generation:0,CreationTimestamp:2019-12-26 12:28:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 30d376d0-27db-11ea-a994-fa163e34d433 0xc000b7c59f 0xc000b7c780}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rjjn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rjjn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rjjn6 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000b7cd00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000b7cd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:28:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:28:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:28:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:28:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-26 12:28:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:28:16.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-c99v5" for this suite.
Dec 26 12:28:24.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:28:24.574: INFO: namespace: e2e-tests-deployment-c99v5, resource: bindings, ignored listing per whitelist
Dec 26 12:28:24.749: INFO: namespace e2e-tests-deployment-c99v5 deletion completed in 8.327947954s

• [SLOW TEST:21.436 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
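The deployment dumped above uses `strategy.type: Recreate`, which is what the test verifies: all old pods are scaled down before any new-revision pod starts, so old and new pods never run concurrently. A trimmed manifest matching the spec printed in the log (fields condensed; only the parts relevant to the strategy are shown):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate   # scale old ReplicaSet to 0 before creating new pods
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

This contrasts with the default `RollingUpdate` strategy, where old and new pods overlap during a rollout; the test's watch fails if it ever observes pods from both revisions running at once.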
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:28:24.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 26 12:28:34.944: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-362f11dc-27db-11ea-948a-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-hkcc7", SelfLink:"/api/v1/namespaces/e2e-tests-pods-hkcc7/pods/pod-submit-remove-362f11dc-27db-11ea-948a-0242ac110005", UID:"3630cf5e-27db-11ea-a994-fa163e34d433", ResourceVersion:"16125396", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712960104, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"859711506"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gbsnd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0010887c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gbsnd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d72038), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001addb00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d72090)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001d720b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d720b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d720bc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960104, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960113, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960113, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960104, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000a1bc40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000a1bc60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://863deea99a033760d1cfbfff63b3c01b07c36a4edb1a0bfe7473f9ec10422cf7"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:28:42.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hkcc7" for this suite.
Dec 26 12:28:50.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:28:50.859: INFO: namespace: e2e-tests-pods-hkcc7, resource: bindings, ignored listing per whitelist
Dec 26 12:28:50.900: INFO: namespace e2e-tests-pods-hkcc7 deletion completed in 8.225912299s

• [SLOW TEST:26.150 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:28:50.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1226 12:29:05.933627       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 12:29:05.933: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:29:05.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-w6ppr" for this suite.
Dec 26 12:29:30.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:29:30.791: INFO: namespace: e2e-tests-gc-w6ppr, resource: bindings, ignored listing per whitelist
Dec 26 12:29:30.878: INFO: namespace e2e-tests-gc-w6ppr deletion completed in 24.925459162s

• [SLOW TEST:39.978 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:29:30.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 12:29:31.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5hl78'
Dec 26 12:29:31.417: INFO: stderr: ""
Dec 26 12:29:31.417: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 26 12:29:46.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5hl78 -o json'
Dec 26 12:29:46.703: INFO: stderr: ""
Dec 26 12:29:46.703: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-26T12:29:31Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-5hl78\",\n        \"resourceVersion\": \"16125620\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-5hl78/pods/e2e-test-nginx-pod\",\n        \"uid\": \"5dd4a5f8-27db-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ptft6\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ptft6\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ptft6\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-26T12:29:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-26T12:29:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-26T12:29:42Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-26T12:29:31Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://ef5204844a5eef8e0b5220569cc919ea1bca38c472c3c328f45193ab1654a062\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2019-12-26T12:29:41Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-26T12:29:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 26 12:29:46.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-5hl78'
Dec 26 12:29:47.144: INFO: stderr: ""
Dec 26 12:29:47.145: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 26 12:29:47.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5hl78'
Dec 26 12:29:56.030: INFO: stderr: ""
Dec 26 12:29:56.030: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:29:56.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5hl78" for this suite.
Dec 26 12:30:02.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:30:02.156: INFO: namespace: e2e-tests-kubectl-5hl78, resource: bindings, ignored listing per whitelist
Dec 26 12:30:02.224: INFO: namespace e2e-tests-kubectl-5hl78 deletion completed in 6.163411062s

• [SLOW TEST:31.345 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:30:02.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-705d89ad-27db-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:30:02.515: INFO: Waiting up to 5m0s for pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-lxg4k" to be "success or failure"
Dec 26 12:30:02.557: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.415902ms
Dec 26 12:30:04.597: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08174283s
Dec 26 12:30:06.611: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096021068s
Dec 26 12:30:08.642: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126536515s
Dec 26 12:30:10.948: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432439407s
Dec 26 12:30:13.196: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.681329515s
STEP: Saw pod success
Dec 26 12:30:13.197: INFO: Pod "pod-secrets-706083ef-27db-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:30:13.205: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-706083ef-27db-11ea-948a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 26 12:30:13.525: INFO: Waiting for pod pod-secrets-706083ef-27db-11ea-948a-0242ac110005 to disappear
Dec 26 12:30:13.554: INFO: Pod pod-secrets-706083ef-27db-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:30:13.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lxg4k" for this suite.
Dec 26 12:30:19.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:30:19.906: INFO: namespace: e2e-tests-secrets-lxg4k, resource: bindings, ignored listing per whitelist
Dec 26 12:30:19.910: INFO: namespace e2e-tests-secrets-lxg4k deletion completed in 6.33802843s

• [SLOW TEST:17.685 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:30:19.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-s9p7
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 12:30:20.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-s9p7" in namespace "e2e-tests-subpath-fcbnc" to be "success or failure"
Dec 26 12:30:20.598: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.583947ms
Dec 26 12:30:22.646: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075348032s
Dec 26 12:30:24.660: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089414997s
Dec 26 12:30:27.353: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.782397947s
Dec 26 12:30:29.368: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797188677s
Dec 26 12:30:31.391: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.82017019s
Dec 26 12:30:33.638: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.06696412s
Dec 26 12:30:35.660: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.088857569s
Dec 26 12:30:37.673: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 17.101635026s
Dec 26 12:30:39.693: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 19.12161911s
Dec 26 12:30:41.704: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 21.133277231s
Dec 26 12:30:43.722: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 23.151538664s
Dec 26 12:30:45.739: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 25.167676358s
Dec 26 12:30:47.759: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 27.188450863s
Dec 26 12:30:49.786: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 29.21506313s
Dec 26 12:30:51.878: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 31.306871437s
Dec 26 12:30:53.896: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Running", Reason="", readiness=false. Elapsed: 33.324984902s
Dec 26 12:30:55.913: INFO: Pod "pod-subpath-test-projected-s9p7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.341817173s
STEP: Saw pod success
Dec 26 12:30:55.913: INFO: Pod "pod-subpath-test-projected-s9p7" satisfied condition "success or failure"
Dec 26 12:30:55.918: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-s9p7 container test-container-subpath-projected-s9p7: 
STEP: delete the pod
Dec 26 12:30:56.433: INFO: Waiting for pod pod-subpath-test-projected-s9p7 to disappear
Dec 26 12:30:56.460: INFO: Pod pod-subpath-test-projected-s9p7 no longer exists
STEP: Deleting pod pod-subpath-test-projected-s9p7
Dec 26 12:30:56.460: INFO: Deleting pod "pod-subpath-test-projected-s9p7" in namespace "e2e-tests-subpath-fcbnc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:30:56.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fcbnc" for this suite.
Dec 26 12:31:02.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:31:02.940: INFO: namespace: e2e-tests-subpath-fcbnc, resource: bindings, ignored listing per whitelist
Dec 26 12:31:02.948: INFO: namespace e2e-tests-subpath-fcbnc deletion completed in 6.422889222s

• [SLOW TEST:43.039 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:31:02.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9496da1f-27db-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:31:03.268: INFO: Waiting up to 5m0s for pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-zg2hv" to be "success or failure"
Dec 26 12:31:03.277: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.126047ms
Dec 26 12:31:05.352: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084761174s
Dec 26 12:31:07.750: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482554211s
Dec 26 12:31:10.131: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.862987161s
Dec 26 12:31:12.162: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894068156s
Dec 26 12:31:14.177: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.909611929s
Dec 26 12:31:16.257: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.989280491s
STEP: Saw pod success
Dec 26 12:31:16.257: INFO: Pod "pod-secrets-949817b5-27db-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:31:16.267: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-949817b5-27db-11ea-948a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 26 12:31:16.341: INFO: Waiting for pod pod-secrets-949817b5-27db-11ea-948a-0242ac110005 to disappear
Dec 26 12:31:16.347: INFO: Pod pod-secrets-949817b5-27db-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:31:16.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zg2hv" for this suite.
Dec 26 12:31:22.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:31:22.642: INFO: namespace: e2e-tests-secrets-zg2hv, resource: bindings, ignored listing per whitelist
Dec 26 12:31:22.759: INFO: namespace e2e-tests-secrets-zg2hv deletion completed in 6.361547322s

• [SLOW TEST:19.811 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:31:22.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 26 12:31:23.328: INFO: Waiting up to 5m0s for pod "pod-a08b7937-27db-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-jwdrt" to be "success or failure"
Dec 26 12:31:23.338: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.687743ms
Dec 26 12:31:25.371: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042768414s
Dec 26 12:31:27.388: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059501532s
Dec 26 12:31:29.403: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075213703s
Dec 26 12:31:31.434: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105416165s
Dec 26 12:31:33.457: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129150306s
STEP: Saw pod success
Dec 26 12:31:33.457: INFO: Pod "pod-a08b7937-27db-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:31:33.470: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a08b7937-27db-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:31:34.683: INFO: Waiting for pod pod-a08b7937-27db-11ea-948a-0242ac110005 to disappear
Dec 26 12:31:34.840: INFO: Pod pod-a08b7937-27db-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:31:34.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jwdrt" for this suite.
Dec 26 12:31:41.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:31:41.197: INFO: namespace: e2e-tests-emptydir-jwdrt, resource: bindings, ignored listing per whitelist
Dec 26 12:31:41.253: INFO: namespace e2e-tests-emptydir-jwdrt deletion completed in 6.339421503s

• [SLOW TEST:18.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:31:41.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:31:41.475: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 26 12:31:46.621: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 26 12:31:50.713: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 26 12:31:52.727: INFO: Creating deployment "test-rollover-deployment"
Dec 26 12:31:52.824: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 26 12:31:54.894: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 26 12:31:54.911: INFO: Ensure that both replica sets have 1 created replica
Dec 26 12:31:54.921: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 26 12:31:54.937: INFO: Updating deployment test-rollover-deployment
Dec 26 12:31:54.937: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 26 12:31:58.238: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 26 12:31:58.264: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 26 12:31:58.276: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:31:58.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:00.319: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:00.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:02.360: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:02.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:04.550: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:04.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:06.298: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:06.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:08.300: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:08.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:10.297: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:10.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960328, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:12.296: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:12.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960328, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:14.303: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:14.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960328, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:16.319: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:16.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960328, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:18.299: INFO: all replica sets need to contain the pod-template-hash label
Dec 26 12:32:18.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960328, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712960313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:32:20.299: INFO: 
Dec 26 12:32:20.299: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 26 12:32:20.315: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-f8zcz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f8zcz/deployments/test-rollover-deployment,UID:b2160928-27db-11ea-a994-fa163e34d433,ResourceVersion:16126007,Generation:2,CreationTimestamp:2019-12-26 12:31:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-26 12:31:53 +0000 UTC 2019-12-26 12:31:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-26 12:32:19 +0000 UTC 2019-12-26 12:31:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 26 12:32:20.321: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-f8zcz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f8zcz/replicasets/test-rollover-deployment-5b8479fdb6,UID:b368a264-27db-11ea-a994-fa163e34d433,ResourceVersion:16125998,Generation:2,CreationTimestamp:2019-12-26 12:31:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b2160928-27db-11ea-a994-fa163e34d433 0xc000a0d687 0xc000a0d688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 26 12:32:20.321: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 26 12:32:20.322: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-f8zcz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f8zcz/replicasets/test-rollover-controller,UID:ab53566f-27db-11ea-a994-fa163e34d433,ResourceVersion:16126006,Generation:2,CreationTimestamp:2019-12-26 12:31:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b2160928-27db-11ea-a994-fa163e34d433 0xc000a0d497 0xc000a0d498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 26 12:32:20.322: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-f8zcz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f8zcz/replicasets/test-rollover-deployment-58494b7559,UID:b23149fe-27db-11ea-a994-fa163e34d433,ResourceVersion:16125960,Generation:2,CreationTimestamp:2019-12-26 12:31:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b2160928-27db-11ea-a994-fa163e34d433 0xc000a0d597 0xc000a0d598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 26 12:32:20.334: INFO: Pod "test-rollover-deployment-5b8479fdb6-hscnl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-hscnl,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-f8zcz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-f8zcz/pods/test-rollover-deployment-5b8479fdb6-hscnl,UID:b464be7e-27db-11ea-a994-fa163e34d433,ResourceVersion:16125983,Generation:0,CreationTimestamp:2019-12-26 12:31:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 b368a264-27db-11ea-a994-fa163e34d433 0xc000e1ed57 0xc000e1ed58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-brpml {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-brpml,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-brpml true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e1eec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e1ef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:31:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:32:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:32:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:31:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-26 12:31:57 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-26 12:32:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://61ecbb38747c7dbef80556a5c54b5498464f9913a92d43294442f1a0c0b590ea}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:32:20.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-f8zcz" for this suite.
Dec 26 12:32:28.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:32:28.802: INFO: namespace: e2e-tests-deployment-f8zcz, resource: bindings, ignored listing per whitelist
Dec 26 12:32:28.802: INFO: namespace e2e-tests-deployment-f8zcz deletion completed in 8.446286081s

• [SLOW TEST:47.548 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:32:28.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 26 12:32:41.424: INFO: Pod pod-hostip-c7b8a7d1-27db-11ea-948a-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:32:41.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kb8rc" for this suite.
Dec 26 12:33:05.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:33:05.716: INFO: namespace: e2e-tests-pods-kb8rc, resource: bindings, ignored listing per whitelist
Dec 26 12:33:05.716: INFO: namespace e2e-tests-pods-kb8rc deletion completed in 24.284503264s

• [SLOW TEST:36.914 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:33:05.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-727wm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 26 12:33:05.964: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 26 12:33:38.449: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-727wm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 26 12:33:38.449: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:33:39.108: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:33:39.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-727wm" for this suite.
Dec 26 12:34:03.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:34:03.515: INFO: namespace: e2e-tests-pod-network-test-727wm, resource: bindings, ignored listing per whitelist
Dec 26 12:34:04.189: INFO: namespace e2e-tests-pod-network-test-727wm deletion completed in 25.063357298s

• [SLOW TEST:58.473 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:34:04.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 26 12:34:22.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:22.668: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:24.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:24.679: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:26.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:26.683: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:28.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:28.685: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:30.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:30.684: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:32.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:32.704: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:34.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:34.712: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:36.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:36.689: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:38.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:38.682: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:40.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:40.686: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:42.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:42.696: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:44.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:44.691: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:46.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:46.696: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:48.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:48.690: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:50.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:50.699: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 26 12:34:52.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 26 12:34:52.682: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:34:52.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-q989g" for this suite.
Dec 26 12:35:16.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:35:16.871: INFO: namespace: e2e-tests-container-lifecycle-hook-q989g, resource: bindings, ignored listing per whitelist
Dec 26 12:35:16.947: INFO: namespace e2e-tests-container-lifecycle-hook-q989g deletion completed in 24.235921754s

• [SLOW TEST:72.757 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
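For context, the prestop-exec-hook test above creates a pod whose container declares a `preStop` exec hook; the long run of "still exists" poll lines reflects the termination grace period during which the hook runs. A minimal sketch of such a manifest (the image and command here are assumptions for illustration, not the suite's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # image is an assumption
    lifecycle:
      preStop:
        exec:
          # runs inside the container before SIGTERM is delivered
          command: ["/bin/sh", "-c", "echo prestop"]
  terminationGracePeriodSeconds: 30
```

The test then checks, via the handler pod created in BeforeEach, that the hook actually executed before the container stopped.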
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:35:16.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:35:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-pj4ns" for this suite.
Dec 26 12:35:29.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:35:29.884: INFO: namespace: e2e-tests-namespaces-pj4ns, resource: bindings, ignored listing per whitelist
Dec 26 12:35:29.964: INFO: namespace e2e-tests-namespaces-pj4ns deletion completed in 6.267733641s
STEP: Destroying namespace "e2e-tests-nsdeletetest-t2664" for this suite.
Dec 26 12:35:29.968: INFO: Namespace e2e-tests-nsdeletetest-t2664 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-jpcgj" for this suite.
Dec 26 12:35:36.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:35:36.271: INFO: namespace: e2e-tests-nsdeletetest-jpcgj, resource: bindings, ignored listing per whitelist
Dec 26 12:35:36.442: INFO: namespace e2e-tests-nsdeletetest-jpcgj deletion completed in 6.474542373s

• [SLOW TEST:19.496 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
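The namespace-deletion test above creates a service inside a throwaway namespace, deletes the namespace, recreates it, and verifies the service is gone. A sketch of the kind of service involved (names here are illustrative, not the suite's generated ones):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service          # illustrative name
  namespace: nsdeletetest     # illustrative namespace
spec:
  selector:
    app: test
  ports:
  - port: 80
```

Namespace deletion cascades to all namespaced objects, so recreating a namespace with the same name yields an empty one.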
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:35:36.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:35:36.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-x2jmc" to be "success or failure"
Dec 26 12:35:36.872: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 110.93314ms
Dec 26 12:35:38.912: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15142206s
Dec 26 12:35:40.927: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166644013s
Dec 26 12:35:43.030: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269530295s
Dec 26 12:35:45.042: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.281141857s
Dec 26 12:35:47.058: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.297034527s
STEP: Saw pod success
Dec 26 12:35:47.058: INFO: Pod "downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:35:47.062: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:35:47.968: INFO: Waiting for pod downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:35:47.987: INFO: Pod downwardapi-volume-3798a264-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:35:47.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x2jmc" for this suite.
Dec 26 12:35:56.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:35:56.264: INFO: namespace: e2e-tests-projected-x2jmc, resource: bindings, ignored listing per whitelist
Dec 26 12:35:56.330: INFO: namespace e2e-tests-projected-x2jmc deletion completed in 8.330885009s

• [SLOW TEST:19.887 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
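The projected downward API test above mounts the pod's own name into a file via a projected volume and reads it back from the container (whose empty logs appear in the "Trying to get logs" line). A minimal sketch, with an assumed image and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # image is an assumption
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # exposes the pod's own name
```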
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:35:56.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 26 12:35:56.720: INFO: Waiting up to 5m0s for pod "pod-4372a453-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-wgtbk" to be "success or failure"
Dec 26 12:35:56.738: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.640915ms
Dec 26 12:35:58.749: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028742255s
Dec 26 12:36:00.767: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046291792s
Dec 26 12:36:02.791: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070087245s
Dec 26 12:36:04.957: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.236741064s
Dec 26 12:36:06.996: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274983044s
Dec 26 12:36:09.013: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.292906752s
STEP: Saw pod success
Dec 26 12:36:09.014: INFO: Pod "pod-4372a453-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:36:09.028: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4372a453-27dc-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:36:09.107: INFO: Waiting for pod pod-4372a453-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:36:09.140: INFO: Pod pod-4372a453-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:36:09.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wgtbk" for this suite.
Dec 26 12:36:15.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:36:15.532: INFO: namespace: e2e-tests-emptydir-wgtbk, resource: bindings, ignored listing per whitelist
Dec 26 12:36:15.556: INFO: namespace e2e-tests-emptydir-wgtbk deletion completed in 6.40259718s

• [SLOW TEST:19.226 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
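The `(root,0644,default)` emptyDir test above runs a pod that writes a file with mode 0644 as root into an emptyDir volume on the node's default medium (disk) and verifies the resulting permissions. A sketch of the volume portion of such a pod (image is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # image is an assumption
    command: ["/bin/sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium specified -> node's default (disk-backed) storage
```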
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:36:15.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 26 12:36:15.862: INFO: Waiting up to 5m0s for pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-8d9gl" to be "success or failure"
Dec 26 12:36:15.873: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.915209ms
Dec 26 12:36:18.248: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385421136s
Dec 26 12:36:20.270: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407776243s
Dec 26 12:36:22.356: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493440941s
Dec 26 12:36:24.592: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729627059s
Dec 26 12:36:26.666: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.803795872s
STEP: Saw pod success
Dec 26 12:36:26.666: INFO: Pod "pod-4ee855d0-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:36:26.678: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ee855d0-27dc-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:36:26.920: INFO: Waiting for pod pod-4ee855d0-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:36:26.943: INFO: Pod pod-4ee855d0-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:36:26.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8d9gl" for this suite.
Dec 26 12:36:32.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:36:33.096: INFO: namespace: e2e-tests-emptydir-8d9gl, resource: bindings, ignored listing per whitelist
Dec 26 12:36:33.107: INFO: namespace e2e-tests-emptydir-8d9gl deletion completed in 6.156195639s

• [SLOW TEST:17.550 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
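The tmpfs variant above differs only in the volume's medium: setting `medium: Memory` backs the emptyDir with tmpfs, and the test verifies the mount type and mode. The relevant fragment, as a sketch:

```yaml
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed; contents count against container memory
```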
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:36:33.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:36:33.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-25cbm" to be "success or failure"
Dec 26 12:36:33.421: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.17692ms
Dec 26 12:36:35.446: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050123563s
Dec 26 12:36:37.558: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163009998s
Dec 26 12:36:40.251: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855305667s
Dec 26 12:36:42.271: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875985432s
Dec 26 12:36:44.320: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.924985947s
Dec 26 12:36:46.332: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.936729689s
STEP: Saw pod success
Dec 26 12:36:46.332: INFO: Pod "downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:36:46.336: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:36:47.519: INFO: Waiting for pod downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:36:47.923: INFO: Pod downwardapi-volume-595d0995-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:36:47.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-25cbm" for this suite.
Dec 26 12:36:54.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:36:54.226: INFO: namespace: e2e-tests-projected-25cbm, resource: bindings, ignored listing per whitelist
Dec 26 12:36:54.353: INFO: namespace e2e-tests-projected-25cbm deletion completed in 6.394524254s

• [SLOW TEST:21.245 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
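The memory-request test above uses a `resourceFieldRef` instead of a `fieldRef`, projecting the container's declared memory request into a file. A sketch of the pieces involved (image, request value, and divisor are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memreq-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # image is an assumption
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                   # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi             # value is written in units of 1Mi
```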
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:36:54.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1226 12:37:36.283608       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 12:37:36.283: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:37:36.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-98rh9" for this suite.
Dec 26 12:37:58.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:37:58.583: INFO: namespace: e2e-tests-gc-98rh9, resource: bindings, ignored listing per whitelist
Dec 26 12:37:58.657: INFO: namespace e2e-tests-gc-98rh9 deletion completed in 22.363371519s

• [SLOW TEST:64.304 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
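The garbage-collector test above deletes a replication controller with orphaning semantics, then waits 30 seconds to confirm the GC leaves the pods alone. Orphaning is requested through the delete options; a sketch of the API request body:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With `Orphan`, the owner references on the dependent pods are removed rather than the pods themselves; recent kubectl exposes this as `--cascade=orphan` (older releases used `--cascade=false`).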
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:37:58.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 26 12:37:58.859: INFO: Waiting up to 5m0s for pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-rmt9g" to be "success or failure"
Dec 26 12:37:58.891: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.584729ms
Dec 26 12:38:01.119: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259026612s
Dec 26 12:38:03.130: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270577719s
Dec 26 12:38:05.343: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483390447s
Dec 26 12:38:07.355: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495567683s
Dec 26 12:38:09.393: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.533366769s
STEP: Saw pod success
Dec 26 12:38:09.393: INFO: Pod "pod-8c4dc32c-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:38:09.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8c4dc32c-27dc-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:38:09.506: INFO: Waiting for pod pod-8c4dc32c-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:38:09.516: INFO: Pod pod-8c4dc32c-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:38:09.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rmt9g" for this suite.
Dec 26 12:38:15.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:38:15.873: INFO: namespace: e2e-tests-emptydir-rmt9g, resource: bindings, ignored listing per whitelist
Dec 26 12:38:15.921: INFO: namespace e2e-tests-emptydir-rmt9g deletion completed in 6.396170062s

• [SLOW TEST:17.263 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:38:15.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005
Dec 26 12:38:16.230: INFO: Pod name my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005: Found 0 pods out of 1
Dec 26 12:38:21.286: INFO: Pod name my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005: Found 1 pods out of 1
Dec 26 12:38:21.286: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005" are running
Dec 26 12:38:27.312: INFO: Pod "my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005-skch2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:38:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:38:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:38:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 12:38:16 +0000 UTC Reason: Message:}])
Dec 26 12:38:27.312: INFO: Trying to dial the pod
Dec 26 12:38:32.445: INFO: Controller my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005: Got expected result from replica 1 [my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005-skch2]: "my-hostname-basic-96a8111f-27dc-11ea-948a-0242ac110005-skch2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:38:32.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-rqbld" for this suite.
Dec 26 12:38:40.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:38:40.701: INFO: namespace: e2e-tests-replication-controller-rqbld, resource: bindings, ignored listing per whitelist
Dec 26 12:38:41.261: INFO: namespace e2e-tests-replication-controller-rqbld deletion completed in 8.774829277s

• [SLOW TEST:25.340 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
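The ReplicationController test above brings up one replica of a hostname-serving image and dials it until it answers with its own pod name (the "Got expected result from replica 1" line). A sketch of such a controller; the image and port are assumptions based on the typical e2e serve-hostname setup, not taken from this log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic        # the suite appends a generated UID suffix
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376    # assumed port
```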
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:38:41.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 26 12:38:42.201: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cszc6,SelfLink:/api/v1/namespaces/e2e-tests-watch-cszc6/configmaps/e2e-watch-test-watch-closed,UID:a613e8ef-27dc-11ea-a994-fa163e34d433,ResourceVersion:16126963,Generation:0,CreationTimestamp:2019-12-26 12:38:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 26 12:38:42.201: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cszc6,SelfLink:/api/v1/namespaces/e2e-tests-watch-cszc6/configmaps/e2e-watch-test-watch-closed,UID:a613e8ef-27dc-11ea-a994-fa163e34d433,ResourceVersion:16126964,Generation:0,CreationTimestamp:2019-12-26 12:38:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 26 12:38:42.241: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cszc6,SelfLink:/api/v1/namespaces/e2e-tests-watch-cszc6/configmaps/e2e-watch-test-watch-closed,UID:a613e8ef-27dc-11ea-a994-fa163e34d433,ResourceVersion:16126965,Generation:0,CreationTimestamp:2019-12-26 12:38:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 26 12:38:42.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cszc6,SelfLink:/api/v1/namespaces/e2e-tests-watch-cszc6/configmaps/e2e-watch-test-watch-closed,UID:a613e8ef-27dc-11ea-a994-fa163e34d433,ResourceVersion:16126966,Generation:0,CreationTimestamp:2019-12-26 12:38:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:38:42.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-cszc6" for this suite.
Dec 26 12:38:50.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:38:50.441: INFO: namespace: e2e-tests-watch-cszc6, resource: bindings, ignored listing per whitelist
Dec 26 12:38:50.522: INFO: namespace e2e-tests-watch-cszc6 deletion completed in 8.274038347s

• [SLOW TEST:9.261 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:38:50.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ab4411e6-27dc-11ea-948a-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ab4411e6-27dc-11ea-948a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:39:05.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-946l8" for this suite.
Dec 26 12:39:29.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:39:29.168: INFO: namespace: e2e-tests-projected-946l8, resource: bindings, ignored listing per whitelist
Dec 26 12:39:29.217: INFO: namespace e2e-tests-projected-946l8 deletion completed in 24.180622697s

• [SLOW TEST:38.694 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:39:29.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 26 12:39:29.489: INFO: Waiting up to 5m0s for pod "pod-c24e845b-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-jp6xr" to be "success or failure"
Dec 26 12:39:29.574: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.187741ms
Dec 26 12:39:31.584: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09498927s
Dec 26 12:39:33.604: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114731859s
Dec 26 12:39:35.998: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509003907s
Dec 26 12:39:38.789: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.300266142s
Dec 26 12:39:40.845: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.355436335s
STEP: Saw pod success
Dec 26 12:39:40.845: INFO: Pod "pod-c24e845b-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:39:40.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c24e845b-27dc-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:39:41.180: INFO: Waiting for pod pod-c24e845b-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:39:41.195: INFO: Pod pod-c24e845b-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:39:41.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jp6xr" for this suite.
Dec 26 12:39:47.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:39:47.364: INFO: namespace: e2e-tests-emptydir-jp6xr, resource: bindings, ignored listing per whitelist
Dec 26 12:39:47.397: INFO: namespace e2e-tests-emptydir-jp6xr deletion completed in 6.194109251s

• [SLOW TEST:18.180 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:39:47.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 26 12:40:00.308: INFO: Successfully updated pod "annotationupdatecd2b6c2e-27dc-11ea-948a-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:40:02.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g677p" for this suite.
Dec 26 12:40:29.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:40:29.178: INFO: namespace: e2e-tests-downward-api-g677p, resource: bindings, ignored listing per whitelist
Dec 26 12:40:29.268: INFO: namespace e2e-tests-downward-api-g677p deletion completed in 26.58183058s

• [SLOW TEST:41.870 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:40:29.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:40:29.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-kgcj4" to be "success or failure"
Dec 26 12:40:29.530: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.375119ms
Dec 26 12:40:31.552: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046495618s
Dec 26 12:40:33.569: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063195338s
Dec 26 12:40:35.873: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367416604s
Dec 26 12:40:37.909: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.403049578s
Dec 26 12:40:39.930: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.423960483s
Dec 26 12:40:41.964: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.458498909s
STEP: Saw pod success
Dec 26 12:40:41.964: INFO: Pod "downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:40:41.983: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:40:42.081: INFO: Waiting for pod downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:40:42.090: INFO: Pod downwardapi-volume-e616b456-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:40:42.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kgcj4" for this suite.
Dec 26 12:40:48.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:40:48.194: INFO: namespace: e2e-tests-downward-api-kgcj4, resource: bindings, ignored listing per whitelist
Dec 26 12:40:48.299: INFO: namespace e2e-tests-downward-api-kgcj4 deletion completed in 6.200386498s

• [SLOW TEST:19.031 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:40:48.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 26 12:40:48.751: INFO: Waiting up to 5m0s for pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005" in namespace "e2e-tests-containers-zh7bk" to be "success or failure"
Dec 26 12:40:48.779: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.863217ms
Dec 26 12:40:50.810: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058268676s
Dec 26 12:40:52.850: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09804363s
Dec 26 12:40:55.155: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403637806s
Dec 26 12:40:57.179: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.427237698s
Dec 26 12:40:59.198: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.446685994s
STEP: Saw pod success
Dec 26 12:40:59.198: INFO: Pod "client-containers-f1912792-27dc-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:40:59.203: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f1912792-27dc-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:40:59.371: INFO: Waiting for pod client-containers-f1912792-27dc-11ea-948a-0242ac110005 to disappear
Dec 26 12:40:59.471: INFO: Pod client-containers-f1912792-27dc-11ea-948a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:40:59.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zh7bk" for this suite.
Dec 26 12:41:05.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:41:05.724: INFO: namespace: e2e-tests-containers-zh7bk, resource: bindings, ignored listing per whitelist
Dec 26 12:41:05.842: INFO: namespace e2e-tests-containers-zh7bk deletion completed in 6.360161948s

• [SLOW TEST:17.543 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:41:05.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:41:18.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-f64vv" for this suite.
Dec 26 12:41:24.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:41:24.451: INFO: namespace: e2e-tests-kubelet-test-f64vv, resource: bindings, ignored listing per whitelist
Dec 26 12:41:24.459: INFO: namespace e2e-tests-kubelet-test-f64vv deletion completed in 6.27535946s

• [SLOW TEST:18.617 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:41:24.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:41:24.802: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 82.391915ms)
Dec 26 12:41:24.819: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.183258ms)
Dec 26 12:41:24.830: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.043382ms)
Dec 26 12:41:24.838: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.819832ms)
Dec 26 12:41:24.848: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.658159ms)
Dec 26 12:41:24.855: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.127775ms)
Dec 26 12:41:24.864: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.991612ms)
Dec 26 12:41:24.875: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.883066ms)
Dec 26 12:41:24.884: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.228236ms)
Dec 26 12:41:24.893: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.97353ms)
Dec 26 12:41:24.899: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.787613ms)
Dec 26 12:41:24.911: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.376806ms)
Dec 26 12:41:24.951: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 40.017268ms)
Dec 26 12:41:24.957: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.327831ms)
Dec 26 12:41:24.961: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.945175ms)
Dec 26 12:41:24.966: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.117985ms)
Dec 26 12:41:24.970: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.609262ms)
Dec 26 12:41:24.974: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.001288ms)
Dec 26 12:41:25.033: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 59.240448ms)
Dec 26 12:41:25.043: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.024013ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:41:25.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-x6gch" for this suite.
Dec 26 12:41:31.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:41:31.255: INFO: namespace: e2e-tests-proxy-x6gch, resource: bindings, ignored listing per whitelist
Dec 26 12:41:31.270: INFO: namespace e2e-tests-proxy-x6gch deletion completed in 6.221363965s

• [SLOW TEST:6.810 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:41:31.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-0b0e6696-27dd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:41:31.537: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-9t68n" to be "success or failure"
Dec 26 12:41:31.591: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 54.209805ms
Dec 26 12:41:33.624: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086736304s
Dec 26 12:41:35.640: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102778441s
Dec 26 12:41:37.844: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30739913s
Dec 26 12:41:39.868: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330702638s
Dec 26 12:41:41.897: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.359742177s
STEP: Saw pod success
Dec 26 12:41:41.897: INFO: Pod "pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:41:41.909: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 26 12:41:42.849: INFO: Waiting for pod pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:41:43.069: INFO: Pod pod-projected-secrets-0b0fa42b-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:41:43.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9t68n" for this suite.
Dec 26 12:41:49.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:41:49.345: INFO: namespace: e2e-tests-projected-9t68n, resource: bindings, ignored listing per whitelist
Dec 26 12:41:49.422: INFO: namespace e2e-tests-projected-9t68n deletion completed in 6.33018122s

• [SLOW TEST:18.151 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:41:49.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:41:49.738: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:41:50.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-hbt92" for this suite.
Dec 26 12:41:59.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:41:59.057: INFO: namespace: e2e-tests-custom-resource-definition-hbt92, resource: bindings, ignored listing per whitelist
Dec 26 12:41:59.158: INFO: namespace e2e-tests-custom-resource-definition-hbt92 deletion completed in 8.187183167s

• [SLOW TEST:9.736 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:41:59.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 26 12:41:59.304: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 26 12:41:59.314: INFO: Waiting for terminating namespaces to be deleted...
Dec 26 12:41:59.319: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 26 12:41:59.333: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 26 12:41:59.333: INFO: 	Container coredns ready: true, restart count 0
Dec 26 12:41:59.333: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 26 12:41:59.333: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 26 12:41:59.333: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 26 12:41:59.333: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 26 12:41:59.333: INFO: 	Container weave ready: true, restart count 0
Dec 26 12:41:59.333: INFO: 	Container weave-npc ready: true, restart count 0
Dec 26 12:41:59.333: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 26 12:41:59.333: INFO: 	Container coredns ready: true, restart count 0
Dec 26 12:41:59.333: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 26 12:41:59.333: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 26 12:41:59.333: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 26 12:41:59.432: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1bb4fbba-27dd-11ea-948a-0242ac110005.15e3ec4f36cb517a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-4bm64/filler-pod-1bb4fbba-27dd-11ea-948a-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1bb4fbba-27dd-11ea-948a-0242ac110005.15e3ec5083e2fe8b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1bb4fbba-27dd-11ea-948a-0242ac110005.15e3ec5122ea4c71], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1bb4fbba-27dd-11ea-948a-0242ac110005.15e3ec51833eb15e], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e3ec5206a67ec0], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:42:12.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-4bm64" for this suite.
Dec 26 12:42:18.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:42:19.045: INFO: namespace: e2e-tests-sched-pred-4bm64, resource: bindings, ignored listing per whitelist
Dec 26 12:42:19.118: INFO: namespace e2e-tests-sched-pred-4bm64 deletion completed in 6.348985701s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:19.960 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
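Editor's note: the SchedulerPredicates test above fills the node with "filler" pods and then creates one more pod whose CPU request cannot be satisfied, expecting the `0/1 nodes are available: 1 Insufficient cpu.` event. The fit check it exercises can be sketched as below; the 1000m node allocatable is an assumption for illustration (the real node's capacity is not shown in the log), while the per-pod millicore requests are the ones logged above.

```python
# Hypothetical sketch (not part of the log): the CPU fit check the
# SchedulerPredicates test exercises. A pod fits only if its request plus the
# sum of existing requests stays within the node's allocatable CPU.

def fits_on_node(allocatable_milli_cpu, existing_requests_milli, new_request_milli):
    """Return True if the new pod's CPU request fits in the node's remaining capacity."""
    used = sum(existing_requests_milli)  # total millicores already requested
    return used + new_request_milli <= allocatable_milli_cpu

# Requests observed in the log above (millicores): coredns x2, etcd, apiserver,
# controller-manager, kube-proxy, scheduler, weave-net. Node size is assumed.
existing = [100, 100, 0, 250, 200, 0, 100, 20]
print(fits_on_node(1000, existing, 500))  # a large additional request does not fit
```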
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:42:19.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-281f615c-27dd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 12:42:20.352: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-v9vgc" to be "success or failure"
Dec 26 12:42:20.380: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.061801ms
Dec 26 12:42:22.394: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041468777s
Dec 26 12:42:24.408: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055034798s
Dec 26 12:42:26.522: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169180188s
Dec 26 12:42:28.550: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197457701s
Dec 26 12:42:30.589: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.236110573s
STEP: Saw pod success
Dec 26 12:42:30.589: INFO: Pod "pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:42:30.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 26 12:42:30.814: INFO: Waiting for pod pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:42:30.837: INFO: Pod pod-projected-configmaps-2820d74b-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:42:30.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v9vgc" for this suite.
Dec 26 12:42:37.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:42:37.466: INFO: namespace: e2e-tests-projected-v9vgc, resource: bindings, ignored listing per whitelist
Dec 26 12:42:37.484: INFO: namespace e2e-tests-projected-v9vgc deletion completed in 6.610112999s

• [SLOW TEST:18.364 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
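Editor's note: the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines above come from a poll loop in the test framework. A rough sketch follows; the function name, the 2-second interval, and the injectable clock/sleep parameters are assumptions for illustration, not the framework's actual API.

```python
# Hypothetical sketch: poll a pod's phase until it reaches a terminal state
# ("Succeeded" or "Failed") or the timeout elapses, as the log's repeated
# Pending -> Succeeded lines suggest.
import time

def wait_for_pod_phase(get_phase, timeout_s=300, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout_s passes."""
    start = clock()
    while clock() - start < timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)  # the log shows roughly 2s between polls
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated sequence matching the log: several Pending polls, then Succeeded.
phases = iter(["Pending"] * 5 + ["Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None))  # Succeeded
```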
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:42:37.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:42:47.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-sgf7j" for this suite.
Dec 26 12:42:55.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:42:56.035: INFO: namespace: e2e-tests-emptydir-wrapper-sgf7j, resource: bindings, ignored listing per whitelist
Dec 26 12:42:56.093: INFO: namespace e2e-tests-emptydir-wrapper-sgf7j deletion completed in 8.180997297s

• [SLOW TEST:18.608 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:42:56.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-3d9cef3a-27dd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 12:42:56.348: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-l8knz" to be "success or failure"
Dec 26 12:42:56.352: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.800479ms
Dec 26 12:42:58.766: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418171827s
Dec 26 12:43:00.792: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44415345s
Dec 26 12:43:02.808: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460144552s
Dec 26 12:43:04.878: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530198685s
Dec 26 12:43:06.942: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.594343111s
STEP: Saw pod success
Dec 26 12:43:06.942: INFO: Pod "pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:43:06.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 26 12:43:07.141: INFO: Waiting for pod pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:43:07.148: INFO: Pod pod-projected-configmaps-3d9e9443-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:43:07.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l8knz" for this suite.
Dec 26 12:43:15.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:43:15.249: INFO: namespace: e2e-tests-projected-l8knz, resource: bindings, ignored listing per whitelist
Dec 26 12:43:15.372: INFO: namespace e2e-tests-projected-l8knz deletion completed in 8.214247681s

• [SLOW TEST:19.279 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:43:15.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-hbwzw/secret-test-492927c8-27dd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:43:15.736: INFO: Waiting up to 5m0s for pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-hbwzw" to be "success or failure"
Dec 26 12:43:15.745: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499013ms
Dec 26 12:43:17.764: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02782679s
Dec 26 12:43:19.817: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081024004s
Dec 26 12:43:21.994: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258053745s
Dec 26 12:43:24.017: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280946544s
Dec 26 12:43:26.029: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292754228s
STEP: Saw pod success
Dec 26 12:43:26.029: INFO: Pod "pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:43:26.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005 container env-test: 
STEP: delete the pod
Dec 26 12:43:26.192: INFO: Waiting for pod pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:43:26.223: INFO: Pod pod-configmaps-492a660e-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:43:26.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hbwzw" for this suite.
Dec 26 12:43:32.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:43:32.516: INFO: namespace: e2e-tests-secrets-hbwzw, resource: bindings, ignored listing per whitelist
Dec 26 12:43:32.592: INFO: namespace e2e-tests-secrets-hbwzw deletion completed in 6.359448708s

• [SLOW TEST:17.219 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:43:32.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-534cf865-27dd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:43:32.809: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-7vt9g" to be "success or failure"
Dec 26 12:43:32.832: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.395613ms
Dec 26 12:43:34.877: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068102741s
Dec 26 12:43:36.902: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093526895s
Dec 26 12:43:39.008: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198775064s
Dec 26 12:43:41.023: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214192442s
Dec 26 12:43:43.042: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232843411s
STEP: Saw pod success
Dec 26 12:43:43.042: INFO: Pod "pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:43:43.047: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 26 12:43:43.920: INFO: Waiting for pod pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:43:43.945: INFO: Pod pod-projected-secrets-535829f0-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:43:43.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7vt9g" for this suite.
Dec 26 12:43:50.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:43:50.395: INFO: namespace: e2e-tests-projected-7vt9g, resource: bindings, ignored listing per whitelist
Dec 26 12:43:50.719: INFO: namespace e2e-tests-projected-7vt9g deletion completed in 6.615093533s

• [SLOW TEST:18.127 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:43:50.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 26 12:44:01.596: INFO: Successfully updated pod "labelsupdate5e29433a-27dd-11ea-948a-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:44:03.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qr9qd" for this suite.
Dec 26 12:44:27.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:44:27.989: INFO: namespace: e2e-tests-downward-api-qr9qd, resource: bindings, ignored listing per whitelist
Dec 26 12:44:28.006: INFO: namespace e2e-tests-downward-api-qr9qd deletion completed in 24.245345355s

• [SLOW TEST:37.286 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:44:28.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 26 12:44:52.452: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:44:52.473: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 12:44:54.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:44:54.509: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 12:44:56.474: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:44:56.497: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 12:44:58.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:44:58.871: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 12:45:00.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:45:00.492: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 12:45:02.474: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:45:02.498: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 12:45:04.474: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 12:45:04.504: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:45:04.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5n7mk" for this suite.
Dec 26 12:45:28.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:45:28.732: INFO: namespace: e2e-tests-container-lifecycle-hook-5n7mk, resource: bindings, ignored listing per whitelist
Dec 26 12:45:28.845: INFO: namespace e2e-tests-container-lifecycle-hook-5n7mk deletion completed in 24.248915039s

• [SLOW TEST:60.838 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:45:28.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 26 12:45:29.090: INFO: Waiting up to 5m0s for pod "pod-98aa4054-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-cbwm2" to be "success or failure"
Dec 26 12:45:29.101: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.305613ms
Dec 26 12:45:31.239: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148891586s
Dec 26 12:45:33.258: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168098119s
Dec 26 12:45:35.806: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.71565281s
Dec 26 12:45:37.867: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776652728s
Dec 26 12:45:39.889: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798718142s
STEP: Saw pod success
Dec 26 12:45:39.889: INFO: Pod "pod-98aa4054-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:45:39.897: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-98aa4054-27dd-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 12:45:40.192: INFO: Waiting for pod pod-98aa4054-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:45:40.207: INFO: Pod pod-98aa4054-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:45:40.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cbwm2" for this suite.
Dec 26 12:45:46.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:45:46.908: INFO: namespace: e2e-tests-emptydir-cbwm2, resource: bindings, ignored listing per whitelist
Dec 26 12:45:46.974: INFO: namespace e2e-tests-emptydir-cbwm2 deletion completed in 6.755541756s

• [SLOW TEST:18.129 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
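Editor's note: the `(root,0644,tmpfs)` label in the EmptyDir test above names the expectation being checked: a file on a tmpfs-backed emptyDir, owned by root, with permission bits 0644. A small sketch of what those bits mean; the helper name is invented for illustration.

```python
# Hypothetical sketch: render the 0644 permission bits the way the test
# container would print them (rw for owner, r for group and others).
import stat

def mode_string(mode):
    """Return the rwx string for a regular file with the given mode bits."""
    return stat.filemode(stat.S_IFREG | mode)[1:]  # drop the file-type character

print(mode_string(0o644))  # rw-r--r--
```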
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:45:46.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a3788e52-27dd-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 12:45:47.301: INFO: Waiting up to 5m0s for pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-6mjdk" to be "success or failure"
Dec 26 12:45:47.320: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.039687ms
Dec 26 12:45:49.334: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033024047s
Dec 26 12:45:51.350: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04908894s
Dec 26 12:45:53.370: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068227671s
Dec 26 12:45:55.479: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177991328s
Dec 26 12:45:57.621: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.319163391s
Dec 26 12:45:59.968: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.666895591s
STEP: Saw pod success
Dec 26 12:45:59.968: INFO: Pod "pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:45:59.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 26 12:46:00.317: INFO: Waiting for pod pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:46:00.332: INFO: Pod pod-configmaps-a37b36c5-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:46:00.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6mjdk" for this suite.
Dec 26 12:46:06.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:46:06.639: INFO: namespace: e2e-tests-configmap-6mjdk, resource: bindings, ignored listing per whitelist
Dec 26 12:46:06.652: INFO: namespace e2e-tests-configmap-6mjdk deletion completed in 6.298929618s

• [SLOW TEST:19.677 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:46:06.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 26 12:46:06.888: INFO: Waiting up to 5m0s for pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005" in namespace "e2e-tests-var-expansion-kxxps" to be "success or failure"
Dec 26 12:46:06.907: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.080534ms
Dec 26 12:46:08.923: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03467022s
Dec 26 12:46:10.989: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101127081s
Dec 26 12:46:13.432: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543700442s
Dec 26 12:46:15.461: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.572610422s
Dec 26 12:46:17.476: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.588247175s
STEP: Saw pod success
Dec 26 12:46:17.477: INFO: Pod "var-expansion-af2efefe-27dd-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:46:17.484: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-af2efefe-27dd-11ea-948a-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 26 12:46:18.036: INFO: Waiting for pod var-expansion-af2efefe-27dd-11ea-948a-0242ac110005 to disappear
Dec 26 12:46:18.221: INFO: Pod var-expansion-af2efefe-27dd-11ea-948a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:46:18.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-kxxps" for this suite.
Dec 26 12:46:24.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:46:24.423: INFO: namespace: e2e-tests-var-expansion-kxxps, resource: bindings, ignored listing per whitelist
Dec 26 12:46:24.656: INFO: namespace e2e-tests-var-expansion-kxxps deletion completed in 6.422393828s

• [SLOW TEST:18.003 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:46:24.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 26 12:46:24.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 26 12:46:26.583: INFO: stderr: ""
Dec 26 12:46:26.583: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:46:26.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hfnkn" for this suite.
Dec 26 12:46:32.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:46:33.059: INFO: namespace: e2e-tests-kubectl-hfnkn, resource: bindings, ignored listing per whitelist
Dec 26 12:46:33.094: INFO: namespace e2e-tests-kubectl-hfnkn deletion completed in 6.317141834s

• [SLOW TEST:8.438 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:46:33.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 26 12:46:33.297: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nfw4f,SelfLink:/api/v1/namespaces/e2e-tests-watch-nfw4f/configmaps/e2e-watch-test-label-changed,UID:beed42bc-27dd-11ea-a994-fa163e34d433,ResourceVersion:16128049,Generation:0,CreationTimestamp:2019-12-26 12:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 26 12:46:33.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nfw4f,SelfLink:/api/v1/namespaces/e2e-tests-watch-nfw4f/configmaps/e2e-watch-test-label-changed,UID:beed42bc-27dd-11ea-a994-fa163e34d433,ResourceVersion:16128050,Generation:0,CreationTimestamp:2019-12-26 12:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 26 12:46:33.297: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nfw4f,SelfLink:/api/v1/namespaces/e2e-tests-watch-nfw4f/configmaps/e2e-watch-test-label-changed,UID:beed42bc-27dd-11ea-a994-fa163e34d433,ResourceVersion:16128051,Generation:0,CreationTimestamp:2019-12-26 12:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 26 12:46:43.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nfw4f,SelfLink:/api/v1/namespaces/e2e-tests-watch-nfw4f/configmaps/e2e-watch-test-label-changed,UID:beed42bc-27dd-11ea-a994-fa163e34d433,ResourceVersion:16128065,Generation:0,CreationTimestamp:2019-12-26 12:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 26 12:46:43.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nfw4f,SelfLink:/api/v1/namespaces/e2e-tests-watch-nfw4f/configmaps/e2e-watch-test-label-changed,UID:beed42bc-27dd-11ea-a994-fa163e34d433,ResourceVersion:16128066,Generation:0,CreationTimestamp:2019-12-26 12:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 26 12:46:43.381: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nfw4f,SelfLink:/api/v1/namespaces/e2e-tests-watch-nfw4f/configmaps/e2e-watch-test-label-changed,UID:beed42bc-27dd-11ea-a994-fa163e34d433,ResourceVersion:16128067,Generation:0,CreationTimestamp:2019-12-26 12:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:46:43.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nfw4f" for this suite.
Dec 26 12:46:49.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:46:49.671: INFO: namespace: e2e-tests-watch-nfw4f, resource: bindings, ignored listing per whitelist
Dec 26 12:46:49.684: INFO: namespace e2e-tests-watch-nfw4f deletion completed in 6.293229954s

• [SLOW TEST:16.589 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:46:49.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-tr7ps
Dec 26 12:47:00.109: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-tr7ps
STEP: checking the pod's current state and verifying that restartCount is present
Dec 26 12:47:00.115: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:51:00.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tr7ps" for this suite.
Dec 26 12:51:07.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:51:07.085: INFO: namespace: e2e-tests-container-probe-tr7ps, resource: bindings, ignored listing per whitelist
Dec 26 12:51:07.115: INFO: namespace e2e-tests-container-probe-tr7ps deletion completed in 6.261711731s

• [SLOW TEST:257.431 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:51:07.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6249eea6-27de-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:51:07.419: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-xmhp5" to be "success or failure"
Dec 26 12:51:07.436: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.644929ms
Dec 26 12:51:09.648: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228819611s
Dec 26 12:51:11.682: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26355147s
Dec 26 12:51:13.784: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365136175s
Dec 26 12:51:15.805: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386217546s
Dec 26 12:51:18.251: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.831734639s
STEP: Saw pod success
Dec 26 12:51:18.251: INFO: Pod "pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:51:18.550: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 26 12:51:19.029: INFO: Waiting for pod pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005 to disappear
Dec 26 12:51:19.067: INFO: Pod pod-projected-secrets-6251782f-27de-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:51:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xmhp5" for this suite.
Dec 26 12:51:27.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:51:27.311: INFO: namespace: e2e-tests-projected-xmhp5, resource: bindings, ignored listing per whitelist
Dec 26 12:51:27.439: INFO: namespace e2e-tests-projected-xmhp5 deletion completed in 8.24864712s

• [SLOW TEST:20.323 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:51:27.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 12:51:27.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-z7bj9'
Dec 26 12:51:27.769: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 26 12:51:27.769: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 26 12:51:30.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-z7bj9'
Dec 26 12:51:31.052: INFO: stderr: ""
Dec 26 12:51:31.052: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:51:31.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z7bj9" for this suite.
Dec 26 12:51:53.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:51:53.443: INFO: namespace: e2e-tests-kubectl-z7bj9, resource: bindings, ignored listing per whitelist
Dec 26 12:51:53.463: INFO: namespace e2e-tests-kubectl-z7bj9 deletion completed in 22.382864819s

• [SLOW TEST:26.024 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:51:53.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:51:53.596: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 26 12:51:53.747: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 26 12:51:58.809: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 26 12:52:06.843: INFO: Creating deployment "test-rolling-update-deployment"
Dec 26 12:52:06.872: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 26 12:52:06.905: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 26 12:52:09.516: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 26 12:52:09.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:52:11.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:52:13.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:52:15.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712961527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 26 12:52:17.608: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 26 12:52:17.646: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-ztdhh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ztdhh/deployments/test-rolling-update-deployment,UID:85c224dd-27de-11ea-a994-fa163e34d433,ResourceVersion:16128587,Generation:1,CreationTimestamp:2019-12-26 12:52:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-26 12:52:07 +0000 UTC 2019-12-26 12:52:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-26 12:52:16 +0000 UTC 2019-12-26 12:52:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 26 12:52:17.651: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-ztdhh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ztdhh/replicasets/test-rolling-update-deployment-75db98fb4c,UID:85cfc183-27de-11ea-a994-fa163e34d433,ResourceVersion:16128578,Generation:1,CreationTimestamp:2019-12-26 12:52:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 85c224dd-27de-11ea-a994-fa163e34d433 0xc0014a6437 0xc0014a6438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 26 12:52:17.652: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 26 12:52:17.652: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-ztdhh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ztdhh/replicasets/test-rolling-update-controller,UID:7ddc5866-27de-11ea-a994-fa163e34d433,ResourceVersion:16128586,Generation:2,CreationTimestamp:2019-12-26 12:51:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 85c224dd-27de-11ea-a994-fa163e34d433 0xc0014a6347 0xc0014a6348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 26 12:52:17.659: INFO: Pod "test-rolling-update-deployment-75db98fb4c-s7cfh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-s7cfh,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-ztdhh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ztdhh/pods/test-rolling-update-deployment-75db98fb4c-s7cfh,UID:85df6c5d-27de-11ea-a994-fa163e34d433,ResourceVersion:16128577,Generation:0,CreationTimestamp:2019-12-26 12:52:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 85cfc183-27de-11ea-a994-fa163e34d433 0xc000b32567 0xc000b32568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-md826 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-md826,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-md826 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000b325d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000b325f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:52:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:52:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:52:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 12:52:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-26 12:52:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-26 12:52:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://24310f649b9d9bbdea349a31f6361b0c06dce4d8642f58edd47d00530b976301}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:52:17.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ztdhh" for this suite.
Dec 26 12:52:26.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:52:27.538: INFO: namespace: e2e-tests-deployment-ztdhh, resource: bindings, ignored listing per whitelist
Dec 26 12:52:27.556: INFO: namespace e2e-tests-deployment-ztdhh deletion completed in 9.881935278s

• [SLOW TEST:34.093 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
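Editor's note: the Deployment dump logged above (RollingUpdate strategy, maxUnavailable/maxSurge 25%, one redis replica) corresponds roughly to a manifest like the following sketch. Names, labels, and the image are taken from the log; everything else is assumed defaults.

```yaml
# Sketch reconstructed from the logged Deployment dump; fields not
# visible in the log are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired pods may be unavailable during the update
      maxSurge: 25%         # at most 25% extra pods may be created temporarily
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```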
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:52:27.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-t7qrl
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-t7qrl
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-t7qrl
Dec 26 12:52:27.807: INFO: Found 0 stateful pods, waiting for 1
Dec 26 12:52:37.835: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 26 12:52:37.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 12:52:38.645: INFO: stderr: ""
Dec 26 12:52:38.645: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 12:52:38.645: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 12:52:38.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 12:52:38.831: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 12:52:38.864: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999811s
Dec 26 12:52:40.031: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987492084s
Dec 26 12:52:41.051: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.821019832s
Dec 26 12:52:42.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.801105503s
Dec 26 12:52:43.079: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.785322972s
Dec 26 12:52:44.095: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.772961216s
Dec 26 12:52:45.114: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.756627728s
Dec 26 12:52:46.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.737437206s
Dec 26 12:52:47.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.72196151s
Dec 26 12:52:48.155: INFO: Verifying statefulset ss doesn't scale past 1 for another 710.420534ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-t7qrl
Dec 26 12:52:49.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 12:52:50.024: INFO: stderr: ""
Dec 26 12:52:50.024: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 12:52:50.024: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 12:52:50.073: INFO: Found 1 stateful pods, waiting for 3
Dec 26 12:53:00.158: INFO: Found 2 stateful pods, waiting for 3
Dec 26 12:53:10.107: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 12:53:10.107: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 12:53:10.107: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 12:53:20.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 12:53:20.087: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 12:53:20.087: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 26 12:53:20.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 12:53:21.024: INFO: stderr: ""
Dec 26 12:53:21.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 12:53:21.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 12:53:21.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 12:53:21.448: INFO: stderr: ""
Dec 26 12:53:21.448: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 12:53:21.448: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 12:53:21.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 12:53:22.198: INFO: stderr: ""
Dec 26 12:53:22.198: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 12:53:22.198: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 12:53:22.198: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 12:53:22.260: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 26 12:53:32.311: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 12:53:32.311: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 12:53:32.311: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 12:53:32.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999818s
Dec 26 12:53:33.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990642328s
Dec 26 12:53:34.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.892538902s
Dec 26 12:53:35.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.868196971s
Dec 26 12:53:36.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.847777723s
Dec 26 12:53:37.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.793652378s
Dec 26 12:53:38.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.774423747s
Dec 26 12:53:39.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.723726785s
Dec 26 12:53:40.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.499795612s
Dec 26 12:53:41.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 457.584698ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-t7qrl
Dec 26 12:53:42.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 12:53:43.630: INFO: stderr: ""
Dec 26 12:53:43.631: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 12:53:43.631: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 12:53:43.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 12:53:44.359: INFO: stderr: ""
Dec 26 12:53:44.359: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 12:53:44.359: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 12:53:44.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t7qrl ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 12:53:44.860: INFO: stderr: ""
Dec 26 12:53:44.860: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 12:53:44.861: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 12:53:44.861: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 26 12:54:15.064: INFO: Deleting all statefulset in ns e2e-tests-statefulset-t7qrl
Dec 26 12:54:15.071: INFO: Scaling statefulset ss to 0
Dec 26 12:54:15.078: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 12:54:15.080: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:54:15.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-t7qrl" for this suite.
Dec 26 12:54:23.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:54:23.391: INFO: namespace: e2e-tests-statefulset-t7qrl, resource: bindings, ignored listing per whitelist
Dec 26 12:54:23.438: INFO: namespace e2e-tests-statefulset-t7qrl deletion completed in 8.237063142s

• [SLOW TEST:115.881 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
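Editor's note: the `mv -v /usr/share/nginx/html/index.html /tmp/` exec calls above make a pod unhealthy by removing the file its readiness probe serves, which is what halts scaling. A StatefulSet along these lines would behave as logged; this is a sketch in which the readiness probe details are assumptions, while the name, service, image, and the `foo=bar,baz=blah` labels come from the log.

```yaml
# Sketch; probe path/port are assumed, labels match the watcher selector in the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: OrderedReady   # scale up ss-0 -> ss-1 -> ss-2; scale down in reverse
  replicas: 1
  selector:
    matchLabels:
      foo: bar
      baz: blah
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        readinessProbe:               # fails once index.html is moved to /tmp
          httpGet:
            path: /index.html
            port: 80
```

With `OrderedReady` management, the controller will not create the next ordinal (or delete the previous one) while any stateful pod is unready, which matches the "doesn't scale past N" verification loops in the log.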
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:54:23.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1226 12:54:54.584471       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 12:54:54.584: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:54:54.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jmb7b" for this suite.
Dec 26 12:55:08.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:55:09.568: INFO: namespace: e2e-tests-gc-jmb7b, resource: bindings, ignored listing per whitelist
Dec 26 12:55:09.573: INFO: namespace e2e-tests-gc-jmb7b deletion completed in 14.981655053s

• [SLOW TEST:46.135 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
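Editor's note: the "delete the deployment" step above uses `deleteOptions.propagationPolicy: Orphan`, which removes the Deployment object but leaves its ReplicaSet behind for the garbage collector to (correctly) ignore. The delete request body looks roughly like this sketch:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With the kubectl of this era (v1.13), the equivalent is `kubectl delete deployment <name> --cascade=false`, which orphans dependents rather than cascading the delete.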
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:55:09.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 12:55:10.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-4kjhp" to be "success or failure"
Dec 26 12:55:10.918: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 116.064057ms
Dec 26 12:55:12.936: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134195225s
Dec 26 12:55:14.952: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149945943s
Dec 26 12:55:16.975: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173614783s
Dec 26 12:55:19.173: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371671357s
Dec 26 12:55:21.406: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.604121081s
Dec 26 12:55:24.199: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.396955853s
Dec 26 12:55:26.229: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.426832951s
Dec 26 12:55:28.254: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.452454746s
STEP: Saw pod success
Dec 26 12:55:28.255: INFO: Pod "downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:55:28.272: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 12:55:28.334: INFO: Waiting for pod downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005 to disappear
Dec 26 12:55:28.547: INFO: Pod downwardapi-volume-f35e3c5b-27de-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:55:28.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4kjhp" for this suite.
Dec 26 12:55:34.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:55:34.712: INFO: namespace: e2e-tests-downward-api-4kjhp, resource: bindings, ignored listing per whitelist
Dec 26 12:55:34.869: INFO: namespace e2e-tests-downward-api-4kjhp deletion completed in 6.289884652s

• [SLOW TEST:25.294 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
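Editor's note: the downward API volume test above exposes the container's memory request as a file the container then reads back. A pod spec along these lines would exercise the same path; this is a sketch in which the image, command, request size, and mount path are assumptions, with only the container name `client-container` taken from the log.

```yaml
# Sketch; only the container name comes from the log, the rest is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                       # assumed; the log does not show the image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                          # the value projected into the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi                        # file contains the request in Mi units
```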
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:55:34.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 26 12:55:35.571: INFO: Waiting up to 5m0s for pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2" in namespace "e2e-tests-svcaccounts-bf4jj" to be "success or failure"
Dec 26 12:55:35.578: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.453604ms
Dec 26 12:55:37.727: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15556984s
Dec 26 12:55:39.748: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177295878s
Dec 26 12:55:41.787: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21593265s
Dec 26 12:55:44.169: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.597660153s
Dec 26 12:55:46.211: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.639796283s
Dec 26 12:55:48.223: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.651588509s
Dec 26 12:55:50.543: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.97198947s
Dec 26 12:55:52.574: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.002998468s
Dec 26 12:55:54.691: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Running", Reason="", readiness=false. Elapsed: 19.119707854s
Dec 26 12:55:56.790: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.218916171s
STEP: Saw pod success
Dec 26 12:55:56.790: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2" satisfied condition "success or failure"
Dec 26 12:55:56.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2 container token-test: 
STEP: delete the pod
Dec 26 12:55:56.885: INFO: Waiting for pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2 to disappear
Dec 26 12:55:56.936: INFO: Pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-d7nf2 no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 26 12:55:56.947: INFO: Waiting up to 5m0s for pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl" in namespace "e2e-tests-svcaccounts-bf4jj" to be "success or failure"
Dec 26 12:55:56.983: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 36.387443ms
Dec 26 12:55:59.005: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058454094s
Dec 26 12:56:01.015: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06851691s
Dec 26 12:56:03.086: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138860862s
Dec 26 12:56:05.986: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.039252829s
Dec 26 12:56:08.018: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.071123404s
Dec 26 12:56:10.825: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.877999902s
Dec 26 12:56:12.845: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.898101594s
Dec 26 12:56:14.874: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 17.927527286s
Dec 26 12:56:17.053: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Pending", Reason="", readiness=false. Elapsed: 20.10642526s
Dec 26 12:56:19.063: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.116198807s
STEP: Saw pod success
Dec 26 12:56:19.063: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl" satisfied condition "success or failure"
Dec 26 12:56:19.069: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl container root-ca-test: 
STEP: delete the pod
Dec 26 12:56:20.359: INFO: Waiting for pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl to disappear
Dec 26 12:56:20.541: INFO: Pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-r8phl no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 26 12:56:20.607: INFO: Waiting up to 5m0s for pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5" in namespace "e2e-tests-svcaccounts-bf4jj" to be "success or failure"
Dec 26 12:56:20.633: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.228307ms
Dec 26 12:56:22.647: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040086916s
Dec 26 12:56:24.665: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058255488s
Dec 26 12:56:27.048: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441580141s
Dec 26 12:56:29.066: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.458938763s
Dec 26 12:56:31.391: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.784216103s
Dec 26 12:56:33.418: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.810872327s
Dec 26 12:56:35.457: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.850037407s
Dec 26 12:56:37.472: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.864850513s
Dec 26 12:56:39.500: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.892609983s
Dec 26 12:56:41.530: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.923067885s
STEP: Saw pod success
Dec 26 12:56:41.530: INFO: Pod "pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5" satisfied condition "success or failure"
Dec 26 12:56:41.547: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5 container namespace-test: 
STEP: delete the pod
Dec 26 12:56:41.738: INFO: Waiting for pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5 to disappear
Dec 26 12:56:41.949: INFO: Pod pod-service-account-0226b903-27df-11ea-948a-0242ac110005-j4pf5 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:56:41.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-bf4jj" for this suite.
Dec 26 12:56:50.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:56:50.179: INFO: namespace: e2e-tests-svcaccounts-bf4jj, resource: bindings, ignored listing per whitelist
Dec 26 12:56:50.186: INFO: namespace e2e-tests-svcaccounts-bf4jj deletion completed in 8.218209777s

• [SLOW TEST:75.317 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
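The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Elapsed: ...` lines above come from the framework's phase-polling loop. A minimal Python sketch of the same poll-with-timeout pattern (the `get_phase` callback, the 2-second interval, and the simulated phase sequence are illustrative assumptions, not the framework's actual Go implementation):

```python
import time

def wait_for_pod_phase(get_phase, target_phases, timeout_s=300, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in target_phases or timeout_s elapses.

    Mirrors the log pattern above: each observation is printed with the elapsed
    time, a terminal phase returns, and exhausting the timeout raises.
    """
    start = clock()
    while True:
        phase = get_phase()                     # e.g. "Pending", "Running", "Succeeded"
        elapsed = clock() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in target_phases:
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f'pod did not reach {target_phases} within {timeout_s}s')
        sleep(interval_s)

# Simulated sequence of observed phases, ending in Succeeded (sleep is stubbed out).
phases = iter(['Pending', 'Pending', 'Running', 'Succeeded'])
result = wait_for_pod_phase(lambda: next(phases), {'Succeeded', 'Failed'},
                            sleep=lambda s: None)
```

The real framework polls the API server for the pod object each iteration; here the phase source is injected so the loop itself can be exercised in isolation.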
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:56:50.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 26 12:56:50.608: INFO: Waiting up to 5m0s for pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005" in namespace "e2e-tests-var-expansion-lxtx4" to be "success or failure"
Dec 26 12:56:50.655: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.254869ms
Dec 26 12:56:52.810: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20156928s
Dec 26 12:56:54.849: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240733804s
Dec 26 12:56:57.489: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.880964121s
Dec 26 12:56:59.496: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88798024s
Dec 26 12:57:02.469: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.860604951s
STEP: Saw pod success
Dec 26 12:57:02.469: INFO: Pod "var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:57:02.490: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 26 12:57:02.934: INFO: Waiting for pod var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005 to disappear
Dec 26 12:57:03.092: INFO: Pod var-expansion-2ed2bfba-27df-11ea-948a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:57:03.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-lxtx4" for this suite.
Dec 26 12:57:09.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:57:09.277: INFO: namespace: e2e-tests-var-expansion-lxtx4, resource: bindings, ignored listing per whitelist
Dec 26 12:57:09.332: INFO: namespace e2e-tests-var-expansion-lxtx4 deletion completed in 6.23314946s

• [SLOW TEST:19.146 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
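The env-composition test above exercises Kubernetes' `$(VAR)` expansion of earlier entries in a container's `env` list. The exact manifest is not shown in the log; a hedged sketch of the mechanism (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOO_BAR"]
    env:
    - name: FOO
      value: "foo-value"
    - name: BAR
      value: "bar-value"
    - name: FOO_BAR
      value: "$(FOO);;$(BAR)"   # $(VAR) references expand to earlier env values
```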
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:57:09.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3a2bf3da-27df-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:57:09.555: INFO: Waiting up to 5m0s for pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-mfvmr" to be "success or failure"
Dec 26 12:57:09.619: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.931997ms
Dec 26 12:57:11.661: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105280814s
Dec 26 12:57:13.698: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142664786s
Dec 26 12:57:16.090: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.534532149s
Dec 26 12:57:18.145: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590007502s
Dec 26 12:57:20.256: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.700604028s
Dec 26 12:57:22.633: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.077714198s
STEP: Saw pod success
Dec 26 12:57:22.633: INFO: Pod "pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:57:22.655: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 26 12:57:23.254: INFO: Waiting for pod pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005 to disappear
Dec 26 12:57:23.284: INFO: Pod pod-secrets-3a2cc7f0-27df-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:57:23.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mfvmr" for this suite.
Dec 26 12:57:31.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:57:31.599: INFO: namespace: e2e-tests-secrets-mfvmr, resource: bindings, ignored listing per whitelist
Dec 26 12:57:31.837: INFO: namespace e2e-tests-secrets-mfvmr deletion completed in 8.54254411s

• [SLOW TEST:22.505 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
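The test above mounts one secret at two different paths in the same pod. A hedged sketch of such a pod spec (secret name, mount paths, and the `cat` command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret   # same secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: my-secret
```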
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:57:31.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-7lbn4
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7lbn4 to expose endpoints map[]
Dec 26 12:57:32.214: INFO: Get endpoints failed (35.351748ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 26 12:57:33.242: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7lbn4 exposes endpoints map[] (1.06276355s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-7lbn4
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7lbn4 to expose endpoints map[pod1:[100]]
Dec 26 12:57:38.122: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.862475018s elapsed, will retry)
Dec 26 12:57:43.703: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7lbn4 exposes endpoints map[pod1:[100]] (10.444003873s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-7lbn4
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7lbn4 to expose endpoints map[pod2:[101] pod1:[100]]
Dec 26 12:57:49.314: INFO: Unexpected endpoints: found map[484f5271-27df-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.601604194s elapsed, will retry)
Dec 26 12:57:59.101: INFO: Unexpected endpoints: found map[484f5271-27df-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (15.388217642s elapsed, will retry)
Dec 26 12:58:00.126: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7lbn4 exposes endpoints map[pod1:[100] pod2:[101]] (16.413284312s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-7lbn4
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7lbn4 to expose endpoints map[pod2:[101]]
Dec 26 12:58:01.224: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7lbn4 exposes endpoints map[pod2:[101]] (1.079797686s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-7lbn4
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7lbn4 to expose endpoints map[]
Dec 26 12:58:03.840: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7lbn4 exposes endpoints map[] (2.604700688s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:58:04.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7lbn4" for this suite.
Dec 26 12:58:28.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:58:28.889: INFO: namespace: e2e-tests-services-7lbn4, resource: bindings, ignored listing per whitelist
Dec 26 12:58:29.000: INFO: namespace e2e-tests-services-7lbn4 deletion completed in 24.364773125s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:57.162 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
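The endpoint maps in the log (`map[pod1:[100] pod2:[101]]`) show each pod backing a different target port of one multiport Service. A hedged sketch of such a Service (the selector label and port names are assumptions; target ports 100 and 101 match the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # assumed label; the log does not show the selector
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # served by pod1 in the log above
  - name: portname2
    port: 81
    targetPort: 101   # served by pod2 in the log above
```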
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:58:29.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 12:58:29.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:58:41.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-22vfz" for this suite.
Dec 26 12:59:25.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:59:25.474: INFO: namespace: e2e-tests-pods-22vfz, resource: bindings, ignored listing per whitelist
Dec 26 12:59:25.611: INFO: namespace e2e-tests-pods-22vfz deletion completed in 44.258420268s

• [SLOW TEST:56.611 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
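The websocket-logs test above streams a pod's logs through the API server rather than via `kubectl logs`. As a small illustrative sketch, the path such a client would open follows the core v1 API layout (the path shape and the `follow` query parameter are assumptions based on the standard pod `log` subresource; the pod name here is hypothetical):

```python
def pod_log_path(namespace, pod, follow=True):
    """Build the core v1 API path for reading a pod's logs.

    A websocket-capable client would open this endpoint against the API
    server to stream log output.
    """
    path = f'/api/v1/namespaces/{namespace}/pods/{pod}/log'
    if follow:
        path += '?follow=true'   # keep the stream open as new log lines arrive
    return path

print(pod_log_path('e2e-tests-pods-22vfz', 'pod-logs-websocket'))
```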
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:59:25.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:59:26.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mc8nz" for this suite.
Dec 26 12:59:42.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:59:42.515: INFO: namespace: e2e-tests-pods-mc8nz, resource: bindings, ignored listing per whitelist
Dec 26 12:59:42.646: INFO: namespace e2e-tests-pods-mc8nz deletion completed in 16.354509809s

• [SLOW TEST:17.035 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
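The "verifying QOS class is set on the pod" step depends on the pod's resource spec: when every container's requests equal its limits, the pod is assigned the Guaranteed QoS class. A hedged illustration (names, image, and quantities are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo   # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m       # requests == limits for every container
        memory: 100Mi   # => status.qosClass: Guaranteed
```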
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 12:59:42.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-95a12f63-27df-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 12:59:43.003: INFO: Waiting up to 5m0s for pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-4t2x4" to be "success or failure"
Dec 26 12:59:43.024: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.011237ms
Dec 26 12:59:45.042: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039209781s
Dec 26 12:59:47.075: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071718967s
Dec 26 12:59:49.217: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213558676s
Dec 26 12:59:51.610: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606609054s
Dec 26 12:59:53.642: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.638429818s
STEP: Saw pod success
Dec 26 12:59:53.642: INFO: Pod "pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 12:59:53.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 26 12:59:53.905: INFO: Waiting for pod pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005 to disappear
Dec 26 12:59:53.920: INFO: Pod pod-secrets-95a2ac33-27df-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 12:59:53.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4t2x4" for this suite.
Dec 26 13:00:00.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:00:00.342: INFO: namespace: e2e-tests-secrets-4t2x4, resource: bindings, ignored listing per whitelist
Dec 26 13:00:00.342: INFO: namespace e2e-tests-secrets-4t2x4 deletion completed in 6.405955145s

• [SLOW TEST:17.696 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
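"Consumable in volume with mappings" refers to remapping a secret's keys to chosen file names via `items` on the secret volume. A hedged fragment of such a volume definition (secret name, key, and path are illustrative):

```yaml
volumes:
- name: secret-volume
  secret:
    secretName: secret-test-map   # illustrative name
    items:
    - key: data-1                 # key in the Secret
      path: new-path-data-1       # file name it is mapped to under the mount path
```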
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:00:00.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w6d7b
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 26 13:00:01.092: INFO: Found 0 stateful pods, waiting for 3
Dec 26 13:00:11.287: INFO: Found 1 stateful pods, waiting for 3
Dec 26 13:00:21.108: INFO: Found 2 stateful pods, waiting for 3
Dec 26 13:00:31.111: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 13:00:31.111: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 13:00:31.111: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 13:00:41.173: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 13:00:41.173: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 13:00:41.173: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 13:00:41.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6d7b ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 13:00:41.979: INFO: stderr: ""
Dec 26 13:00:41.979: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 13:00:41.979: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 26 13:00:52.051: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 26 13:01:02.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6d7b ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 13:01:03.058: INFO: stderr: ""
Dec 26 13:01:03.058: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 13:01:03.058: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 13:01:13.143: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:01:13.143: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:13.143: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:13.143: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:23.251: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:01:23.251: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:23.251: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:33.263: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:01:33.264: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:33.264: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:43.250: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:01:43.250: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:01:53.215: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:01:53.215: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:02:03.161: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:02:03.161: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 13:02:13.205: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:02:23.422: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 26 13:02:33.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6d7b ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 13:02:34.027: INFO: stderr: ""
Dec 26 13:02:34.028: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 13:02:34.028: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 13:02:44.231: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 26 13:02:56.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6d7b ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 13:02:56.973: INFO: stderr: ""
Dec 26 13:02:56.973: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 13:02:56.973: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 13:03:07.056: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:03:07.056: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:07.056: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:07.056: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:17.199: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:03:17.199: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:17.199: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:27.092: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:03:27.092: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:27.092: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:37.081: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:03:37.081: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:47.074: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
Dec 26 13:03:47.074: INFO: Waiting for Pod e2e-tests-statefulset-w6d7b/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 13:03:57.078: INFO: Waiting for StatefulSet e2e-tests-statefulset-w6d7b/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 26 13:04:07.075: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w6d7b
Dec 26 13:04:07.078: INFO: Scaling statefulset ss2 to 0
Dec 26 13:04:37.149: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 13:04:37.163: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:04:37.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w6d7b" for this suite.
Dec 26 13:04:49.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:04:49.595: INFO: namespace: e2e-tests-statefulset-w6d7b, resource: bindings, ignored listing per whitelist
Dec 26 13:04:49.649: INFO: namespace e2e-tests-statefulset-w6d7b deletion completed in 12.4458702s

• [SLOW TEST:289.306 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
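The repeated "Waiting for Pod ... to have revision X update revision Y" lines above come from a poll loop that compares each pod's recorded revision against the StatefulSet's update revision and keeps waiting while any pod lags. A minimal standalone sketch of that check (the `pod` type and `rolloutComplete` helper are invented for illustration, not the e2e framework's actual code):

```go
package main

import "fmt"

// pod holds a pod name and the revision recorded in its
// controller-revision-hash label (simplified, hypothetical type).
type pod struct {
	name     string
	revision string
}

// rolloutComplete reports whether every pod is already on the update
// revision, returning the names of pods that still lag behind — the same
// condition behind the "Waiting for Pod ..." log lines above.
func rolloutComplete(pods []pod, updateRevision string) (bool, []string) {
	var lagging []string
	for _, p := range pods {
		if p.revision != updateRevision {
			lagging = append(lagging, p.name)
		}
	}
	return len(lagging) == 0, lagging
}

func main() {
	// Mirrors the log above: ss2-0 is still on the old revision.
	pods := []pod{
		{"ss2-0", "ss2-6c5cd755cd"},
		{"ss2-1", "ss2-7c9b54fd4c"},
		{"ss2-2", "ss2-7c9b54fd4c"},
	}
	done, lagging := rolloutComplete(pods, "ss2-7c9b54fd4c")
	fmt.Println(done, lagging) // false [ss2-0]
}
```

In the real test this predicate is re-evaluated on each poll tick (roughly every 10s in the timestamps above) until it holds or the wait times out.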
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:04:49.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 13:04:50.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:04:54.217: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 26 13:04:54.218: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 26 13:04:54.438: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 26 13:04:54.687: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 26 13:04:54.819: INFO: scanned /root for discovery docs: 
Dec 26 13:04:54.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:05:28.412: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 26 13:05:28.412: INFO: stdout: "Created e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189\nScaling up e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 26 13:05:28.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:05:28.788: INFO: stderr: ""
Dec 26 13:05:28.788: INFO: stdout: "e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189-p6z5n e2e-test-nginx-rc-tjtsn "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 26 13:05:33.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:05:33.980: INFO: stderr: ""
Dec 26 13:05:33.980: INFO: stdout: "e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189-p6z5n "
Dec 26 13:05:33.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189-p6z5n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:05:34.115: INFO: stderr: ""
Dec 26 13:05:34.115: INFO: stdout: "true"
Dec 26 13:05:34.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189-p6z5n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:05:34.229: INFO: stderr: ""
Dec 26 13:05:34.229: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 26 13:05:34.230: INFO: e2e-test-nginx-rc-c7c8ab5ebeb2754392e25d52b2e6c189-p6z5n is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 26 13:05:34.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qll9c'
Dec 26 13:05:34.366: INFO: stderr: ""
Dec 26 13:05:34.366: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:05:34.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qll9c" for this suite.
Dec 26 13:05:58.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:05:58.765: INFO: namespace: e2e-tests-kubectl-qll9c, resource: bindings, ignored listing per whitelist
Dec 26 13:05:58.802: INFO: namespace e2e-tests-kubectl-qll9c deletion completed in 24.411320938s

• [SLOW TEST:69.153 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
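The `kubectl get pods -o template` invocations above extract pod names with a Go text/template (`{{range .items}}{{.metadata.name}} {{end}}`). The same template can be exercised standalone against fake API-shaped data; everything below (the `renderNames` helper and the sample pod names) is invented for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNames executes the pod-name template the test above passes to
// kubectl. Field access like .metadata.name works on nested
// map[string]interface{} values just as it does on decoded API JSON.
func renderNames(podList map[string]interface{}) string {
	t := template.Must(template.New("names").Parse(
		"{{range .items}}{{.metadata.name}} {{end}}"))
	var buf bytes.Buffer
	if err := t.Execute(&buf, podList); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	podList := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "e2e-test-nginx-rc-p6z5n"}},
			map[string]interface{}{"metadata": map[string]interface{}{"name": "e2e-test-nginx-rc-tjtsn"}},
		},
	}
	fmt.Println(renderNames(podList))
}
```

This is why the log shows two space-separated names mid-rollout and one after the old controller scales down: the template simply concatenates whatever pods currently match the `run=e2e-test-nginx-rc` label.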
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:05:58.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 26 13:05:59.087: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:06:26.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-99m9j" for this suite.
Dec 26 13:06:50.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:06:50.996: INFO: namespace: e2e-tests-init-container-99m9j, resource: bindings, ignored listing per whitelist
Dec 26 13:06:51.111: INFO: namespace e2e-tests-init-container-99m9j deletion completed in 24.214803152s

• [SLOW TEST:52.308 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:06:51.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 26 13:06:51.292: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 26 13:06:51.298: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5nxq8/daemonsets","resourceVersion":"16130615"},"items":null}

Dec 26 13:06:51.300: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5nxq8/pods","resourceVersion":"16130615"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:06:51.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5nxq8" for this suite.
Dec 26 13:06:57.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:06:57.530: INFO: namespace: e2e-tests-daemonsets-5nxq8, resource: bindings, ignored listing per whitelist
Dec 26 13:06:57.583: INFO: namespace e2e-tests-daemonsets-5nxq8 deletion completed in 6.274141728s

S [SKIPPING] [6.473 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 26 13:06:51.292: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
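The DaemonSet spec above was skipped with "Requires at least 2 nodes (not -1)". A sketch of that guard: the framework compares a configured node count against the test's minimum, and `-1` here is presumably an unconfigured default on this single-node cluster (an assumption; the log itself does not say where -1 came from). The `skipReason` helper below is hypothetical:

```go
package main

import "fmt"

// skipReason returns the skip message when the cluster's node count is
// below the test's minimum, and "" when the test may run. Sketch of the
// guard behind the SKIPPING result above, not the framework's actual code.
func skipReason(nodeCount, min int) string {
	if nodeCount < min {
		return fmt.Sprintf("Requires at least %d nodes (not %d)", min, nodeCount)
	}
	return "" // enough nodes: run the test
}

func main() {
	fmt.Println(skipReason(-1, 2)) // Requires at least 2 nodes (not -1)
}
```

Note the spec is still counted (as `S [SKIPPING]`) and its namespace is still created and torn down; only the body after the guard is skipped.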
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:06:57.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:07:09.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7s9sc" for this suite.
Dec 26 13:08:03.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:08:04.089: INFO: namespace: e2e-tests-kubelet-test-7s9sc, resource: bindings, ignored listing per whitelist
Dec 26 13:08:04.147: INFO: namespace e2e-tests-kubelet-test-7s9sc deletion completed in 54.202884367s

• [SLOW TEST:66.563 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:08:04.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c068a4d3-27e0-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 13:08:04.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-v2rdb" to be "success or failure"
Dec 26 13:08:04.369: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.884111ms
Dec 26 13:08:06.532: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18326503s
Dec 26 13:08:08.549: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200563938s
Dec 26 13:08:10.654: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305400797s
Dec 26 13:08:12.727: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378210577s
Dec 26 13:08:14.750: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.401387854s
STEP: Saw pod success
Dec 26 13:08:14.750: INFO: Pod "pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:08:14.764: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 26 13:08:15.104: INFO: Waiting for pod pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005 to disappear
Dec 26 13:08:15.119: INFO: Pod pod-configmaps-c0692233-27e0-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:08:15.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-v2rdb" for this suite.
Dec 26 13:08:23.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:08:23.250: INFO: namespace: e2e-tests-configmap-v2rdb, resource: bindings, ignored listing per whitelist
Dec 26 13:08:23.352: INFO: namespace e2e-tests-configmap-v2rdb deletion completed in 8.216118511s

• [SLOW TEST:19.205 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
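The `Waiting up to 5m0s for pod ... to be "success or failure"` sequences above poll the pod phase until it terminates: `Succeeded` satisfies the condition, `Failed` ends the wait with an error, and anything else (`Pending`, `Running`) means poll again. A minimal sketch of that predicate (`podFinished` is an invented name, not the framework's actual function):

```go
package main

import (
	"errors"
	"fmt"
)

// podFinished mirrors the "success or failure" wait condition: done=true
// ends the poll loop, and a non-nil error marks the test-pod as failed.
func podFinished(phase string) (done bool, err error) {
	switch phase {
	case "Succeeded":
		return true, nil
	case "Failed":
		return true, errors.New("pod failed")
	default: // Pending, Running, Unknown: keep polling
		return false, nil
	}
}

func main() {
	// The phases logged for pod-configmaps-c0692233... above, in order.
	for _, phase := range []string{"Pending", "Pending", "Succeeded"} {
		done, err := podFinished(phase)
		fmt.Printf("%s: done=%v err=%v\n", phase, done, err)
	}
}
```

Once the condition is satisfied the test fetches the container's logs ("Trying to get logs from node ..."), verifies them, and deletes the pod.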
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:08:23.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 26 13:08:24.944: INFO: Waiting up to 5m0s for pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005" in namespace "e2e-tests-containers-6pn7p" to be "success or failure"
Dec 26 13:08:25.018: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.27089ms
Dec 26 13:08:27.063: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11829976s
Dec 26 13:08:29.087: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141944052s
Dec 26 13:08:31.634: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689028225s
Dec 26 13:08:34.082: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.136944723s
Dec 26 13:08:36.094: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.149591911s
Dec 26 13:08:38.105: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.160371154s
STEP: Saw pod success
Dec 26 13:08:38.105: INFO: Pod "client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:08:38.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 13:08:38.290: INFO: Waiting for pod client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005 to disappear
Dec 26 13:08:38.311: INFO: Pod client-containers-cc55a6e8-27e0-11ea-948a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:08:38.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-6pn7p" for this suite.
Dec 26 13:08:44.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:08:44.657: INFO: namespace: e2e-tests-containers-6pn7p, resource: bindings, ignored listing per whitelist
Dec 26 13:08:44.676: INFO: namespace e2e-tests-containers-6pn7p deletion completed in 6.304190399s

• [SLOW TEST:21.323 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:08:44.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 26 13:08:44.936: INFO: Waiting up to 5m0s for pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-fhq7n" to be "success or failure"
Dec 26 13:08:45.002: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 65.736147ms
Dec 26 13:08:47.026: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08969891s
Dec 26 13:08:49.034: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098374956s
Dec 26 13:08:51.080: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144284403s
Dec 26 13:08:53.098: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161756519s
Dec 26 13:08:55.111: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175380192s
STEP: Saw pod success
Dec 26 13:08:55.111: INFO: Pod "downward-api-d8a601f9-27e0-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:08:55.115: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d8a601f9-27e0-11ea-948a-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 26 13:08:56.115: INFO: Waiting for pod downward-api-d8a601f9-27e0-11ea-948a-0242ac110005 to disappear
Dec 26 13:08:56.318: INFO: Pod downward-api-d8a601f9-27e0-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:08:56.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fhq7n" for this suite.
Dec 26 13:09:04.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:09:04.503: INFO: namespace: e2e-tests-downward-api-fhq7n, resource: bindings, ignored listing per whitelist
Dec 26 13:09:04.612: INFO: namespace e2e-tests-downward-api-fhq7n deletion completed in 8.268871034s

• [SLOW TEST:19.935 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:09:04.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wc992
Dec 26 13:09:17.255: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wc992
STEP: checking the pod's current state and verifying that restartCount is present
Dec 26 13:09:17.261: INFO: Initial restart count of pod liveness-http is 0
Dec 26 13:09:39.838: INFO: Restart count of pod e2e-tests-container-probe-wc992/liveness-http is now 1 (22.576689182s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:09:39.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wc992" for this suite.
Dec 26 13:09:46.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:09:46.262: INFO: namespace: e2e-tests-container-probe-wc992, resource: bindings, ignored listing per whitelist
Dec 26 13:09:46.273: INFO: namespace e2e-tests-container-probe-wc992 deletion completed in 6.260050917s

• [SLOW TEST:41.661 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
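The liveness-probe spec above records the pod's initial `restartCount` (0) and then polls until the count increases, which happened 22.5s later when the kubelet killed and restarted the container after the `/healthz` probe failed. A sketch of that observation over a canned sequence of counts (invented data and helper name):

```go
package main

import "fmt"

// restartObserved reports whether any polled restart count exceeds the
// initial one — the condition the probe test above waits for.
func restartObserved(initial int, observations []int) bool {
	for _, c := range observations {
		if c > initial {
			return true
		}
	}
	return false
}

func main() {
	// e.g. polls returning 0, 0, then 1 after the kubelet restarts the container
	fmt.Println(restartObserved(0, []int{0, 0, 1})) // true
}
```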
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:09:46.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-fd7139a6-27e0-11ea-948a-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 26 13:09:46.671: INFO: Waiting up to 5m0s for pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005" in namespace "e2e-tests-secrets-99pfr" to be "success or failure"
Dec 26 13:09:46.706: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.11959ms
Dec 26 13:09:48.743: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071868544s
Dec 26 13:09:50.789: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117218231s
Dec 26 13:09:53.474: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802918146s
Dec 26 13:09:56.179: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.507534817s
Dec 26 13:09:58.806: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.13437014s
Dec 26 13:10:00.818: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.14642253s
STEP: Saw pod success
Dec 26 13:10:00.818: INFO: Pod "pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005" satisfied condition "success or failure"
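The polling pattern above (repeated `Phase="Pending"` checks with growing `Elapsed`, ending when the pod reports `Succeeded` or the 5m0s budget runs out) can be sketched as a simple poll loop. This is a hedged Python illustration, not the framework's actual Go implementation; `get_phase`, the interval, and the timeout values are stand-ins:

```python
import time

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal pod phase
    ("Succeeded" or "Failed") or the timeout elapses.
    Returns the terminal phase; raises TimeoutError otherwise."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated status sequence mirroring the log: several Pending
# polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases), timeout=5.0, interval=0.01)
```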
Dec 26 13:10:00.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 26 13:10:01.989: INFO: Waiting for pod pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005 to disappear
Dec 26 13:10:02.268: INFO: Pod pod-secrets-fd720d58-27e0-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:10:02.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-99pfr" for this suite.
Dec 26 13:10:08.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:10:08.842: INFO: namespace: e2e-tests-secrets-99pfr, resource: bindings, ignored listing per whitelist
Dec 26 13:10:08.891: INFO: namespace e2e-tests-secrets-99pfr deletion completed in 6.607125738s

• [SLOW TEST:22.618 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:10:08.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-wbhtr
I1226 13:10:09.515848       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-wbhtr, replica count: 1
I1226 13:10:10.567039       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:11.567462       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:12.568334       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:13.568790       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:14.569139       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:15.569722       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:16.570340       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:17.571427       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:18.572180       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:19.572702       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:20.573455       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1226 13:10:21.573903       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
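Each `Created:` / `Got endpoints: ... [Ns]` pair that follows reports one latency sample: the wall-clock time from creating a service to first observing its endpoints. A minimal hedged sketch of that measurement, with both API calls replaced by illustrative stand-in callables:

```python
import time

def measure_endpoint_latency(create_service, wait_for_endpoints):
    """Return the wall-clock delta between creating a service and
    first observing its endpoints, as the 'Got endpoints: ... [Ns]'
    log lines report. Both callables are stand-ins for API calls."""
    start = time.monotonic()
    create_service()
    wait_for_endpoints()
    return time.monotonic() - start

# Stand-ins: creation returns immediately, endpoints appear after ~20ms.
latency = measure_endpoint_latency(lambda: None, lambda: time.sleep(0.02))
```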
Dec 26 13:10:21.900: INFO: Created: latency-svc-762f2
Dec 26 13:10:22.063: INFO: Got endpoints: latency-svc-762f2 [389.268773ms]
Dec 26 13:10:22.312: INFO: Created: latency-svc-s4zxp
Dec 26 13:10:22.332: INFO: Got endpoints: latency-svc-s4zxp [265.405926ms]
Dec 26 13:10:22.335: INFO: Created: latency-svc-26zwb
Dec 26 13:10:22.481: INFO: Got endpoints: latency-svc-26zwb [413.907626ms]
Dec 26 13:10:22.613: INFO: Created: latency-svc-zn76r
Dec 26 13:10:22.658: INFO: Got endpoints: latency-svc-zn76r [592.092178ms]
Dec 26 13:10:22.912: INFO: Created: latency-svc-q4pb6
Dec 26 13:10:22.963: INFO: Created: latency-svc-tr6xn
Dec 26 13:10:22.963: INFO: Got endpoints: latency-svc-q4pb6 [896.035353ms]
Dec 26 13:10:23.094: INFO: Got endpoints: latency-svc-tr6xn [1.026981041s]
Dec 26 13:10:23.112: INFO: Created: latency-svc-lprq8
Dec 26 13:10:23.126: INFO: Got endpoints: latency-svc-lprq8 [1.05997602s]
Dec 26 13:10:23.315: INFO: Created: latency-svc-zbct5
Dec 26 13:10:23.328: INFO: Got endpoints: latency-svc-zbct5 [1.261642105s]
Dec 26 13:10:23.365: INFO: Created: latency-svc-rmgx6
Dec 26 13:10:23.380: INFO: Got endpoints: latency-svc-rmgx6 [1.314728599s]
Dec 26 13:10:23.576: INFO: Created: latency-svc-25lsq
Dec 26 13:10:23.587: INFO: Got endpoints: latency-svc-25lsq [1.519881756s]
Dec 26 13:10:23.638: INFO: Created: latency-svc-qdscb
Dec 26 13:10:23.663: INFO: Got endpoints: latency-svc-qdscb [1.59578574s]
Dec 26 13:10:23.819: INFO: Created: latency-svc-n566g
Dec 26 13:10:24.007: INFO: Got endpoints: latency-svc-n566g [1.939448696s]
Dec 26 13:10:24.036: INFO: Created: latency-svc-6njcg
Dec 26 13:10:24.243: INFO: Got endpoints: latency-svc-6njcg [2.176207678s]
Dec 26 13:10:24.284: INFO: Created: latency-svc-x4bzb
Dec 26 13:10:24.299: INFO: Got endpoints: latency-svc-x4bzb [2.231389797s]
Dec 26 13:10:24.440: INFO: Created: latency-svc-vxglh
Dec 26 13:10:24.461: INFO: Got endpoints: latency-svc-vxglh [2.396445678s]
Dec 26 13:10:24.520: INFO: Created: latency-svc-sq6gx
Dec 26 13:10:24.719: INFO: Got endpoints: latency-svc-sq6gx [2.654749185s]
Dec 26 13:10:24.776: INFO: Created: latency-svc-nl7lk
Dec 26 13:10:24.814: INFO: Created: latency-svc-2tl65
Dec 26 13:10:24.814: INFO: Got endpoints: latency-svc-nl7lk [2.48237769s]
Dec 26 13:10:24.989: INFO: Got endpoints: latency-svc-2tl65 [2.508103142s]
Dec 26 13:10:25.010: INFO: Created: latency-svc-v9g75
Dec 26 13:10:25.019: INFO: Got endpoints: latency-svc-v9g75 [2.361145067s]
Dec 26 13:10:25.068: INFO: Created: latency-svc-gxvfc
Dec 26 13:10:25.217: INFO: Got endpoints: latency-svc-gxvfc [2.253416725s]
Dec 26 13:10:25.280: INFO: Created: latency-svc-djntq
Dec 26 13:10:25.482: INFO: Got endpoints: latency-svc-djntq [2.388051203s]
Dec 26 13:10:25.748: INFO: Created: latency-svc-wj5nv
Dec 26 13:10:25.864: INFO: Got endpoints: latency-svc-wj5nv [2.737536197s]
Dec 26 13:10:25.948: INFO: Created: latency-svc-m5vh4
Dec 26 13:10:26.126: INFO: Got endpoints: latency-svc-m5vh4 [2.798552143s]
Dec 26 13:10:26.296: INFO: Created: latency-svc-xx5k9
Dec 26 13:10:26.372: INFO: Got endpoints: latency-svc-xx5k9 [507.504817ms]
Dec 26 13:10:26.389: INFO: Created: latency-svc-cfvgb
Dec 26 13:10:26.517: INFO: Got endpoints: latency-svc-cfvgb [3.136458497s]
Dec 26 13:10:26.551: INFO: Created: latency-svc-6bslm
Dec 26 13:10:26.605: INFO: Got endpoints: latency-svc-6bslm [3.018292725s]
Dec 26 13:10:26.803: INFO: Created: latency-svc-825dk
Dec 26 13:10:26.842: INFO: Got endpoints: latency-svc-825dk [3.178826819s]
Dec 26 13:10:27.017: INFO: Created: latency-svc-5gt25
Dec 26 13:10:27.029: INFO: Got endpoints: latency-svc-5gt25 [3.022177781s]
Dec 26 13:10:27.088: INFO: Created: latency-svc-ldnjl
Dec 26 13:10:27.249: INFO: Got endpoints: latency-svc-ldnjl [3.00567663s]
Dec 26 13:10:27.271: INFO: Created: latency-svc-wgkng
Dec 26 13:10:27.296: INFO: Got endpoints: latency-svc-wgkng [2.99750769s]
Dec 26 13:10:27.357: INFO: Created: latency-svc-vtl6p
Dec 26 13:10:27.553: INFO: Got endpoints: latency-svc-vtl6p [3.092204381s]
Dec 26 13:10:27.591: INFO: Created: latency-svc-8zn5f
Dec 26 13:10:27.607: INFO: Got endpoints: latency-svc-8zn5f [2.887856234s]
Dec 26 13:10:27.828: INFO: Created: latency-svc-tx67z
Dec 26 13:10:27.858: INFO: Got endpoints: latency-svc-tx67z [3.043214277s]
Dec 26 13:10:28.038: INFO: Created: latency-svc-blxzl
Dec 26 13:10:28.072: INFO: Got endpoints: latency-svc-blxzl [3.082608245s]
Dec 26 13:10:28.287: INFO: Created: latency-svc-r8qlh
Dec 26 13:10:28.367: INFO: Got endpoints: latency-svc-r8qlh [3.347753132s]
Dec 26 13:10:28.374: INFO: Created: latency-svc-7nvx5
Dec 26 13:10:28.721: INFO: Got endpoints: latency-svc-7nvx5 [3.504598699s]
Dec 26 13:10:28.739: INFO: Created: latency-svc-wmf8k
Dec 26 13:10:28.759: INFO: Got endpoints: latency-svc-wmf8k [3.276739988s]
Dec 26 13:10:29.042: INFO: Created: latency-svc-sv8gv
Dec 26 13:10:29.061: INFO: Got endpoints: latency-svc-sv8gv [2.934424866s]
Dec 26 13:10:29.269: INFO: Created: latency-svc-8k2kt
Dec 26 13:10:29.302: INFO: Got endpoints: latency-svc-8k2kt [2.929959135s]
Dec 26 13:10:29.489: INFO: Created: latency-svc-5sfbn
Dec 26 13:10:29.529: INFO: Created: latency-svc-7l6vz
Dec 26 13:10:29.530: INFO: Got endpoints: latency-svc-5sfbn [3.012627665s]
Dec 26 13:10:29.823: INFO: Got endpoints: latency-svc-7l6vz [3.217981976s]
Dec 26 13:10:29.901: INFO: Created: latency-svc-p7wkg
Dec 26 13:10:30.020: INFO: Got endpoints: latency-svc-p7wkg [3.177989245s]
Dec 26 13:10:30.053: INFO: Created: latency-svc-gbwtn
Dec 26 13:10:30.071: INFO: Got endpoints: latency-svc-gbwtn [3.041435964s]
Dec 26 13:10:30.260: INFO: Created: latency-svc-fk62h
Dec 26 13:10:30.298: INFO: Got endpoints: latency-svc-fk62h [3.048763877s]
Dec 26 13:10:30.589: INFO: Created: latency-svc-227b6
Dec 26 13:10:30.875: INFO: Got endpoints: latency-svc-227b6 [3.578459027s]
Dec 26 13:10:30.888: INFO: Created: latency-svc-qqj4x
Dec 26 13:10:30.909: INFO: Got endpoints: latency-svc-qqj4x [3.355940141s]
Dec 26 13:10:31.136: INFO: Created: latency-svc-fkzfk
Dec 26 13:10:31.158: INFO: Got endpoints: latency-svc-fkzfk [3.550603199s]
Dec 26 13:10:31.216: INFO: Created: latency-svc-kbjzh
Dec 26 13:10:31.427: INFO: Got endpoints: latency-svc-kbjzh [3.568727949s]
Dec 26 13:10:31.503: INFO: Created: latency-svc-9nzcr
Dec 26 13:10:31.704: INFO: Created: latency-svc-s8n6q
Dec 26 13:10:31.705: INFO: Got endpoints: latency-svc-9nzcr [3.632363767s]
Dec 26 13:10:31.738: INFO: Got endpoints: latency-svc-s8n6q [3.369910913s]
Dec 26 13:10:32.012: INFO: Created: latency-svc-c4k54
Dec 26 13:10:32.041: INFO: Got endpoints: latency-svc-c4k54 [3.318839939s]
Dec 26 13:10:32.088: INFO: Created: latency-svc-68rrh
Dec 26 13:10:32.295: INFO: Created: latency-svc-tzcvs
Dec 26 13:10:32.527: INFO: Got endpoints: latency-svc-68rrh [3.766869257s]
Dec 26 13:10:32.539: INFO: Created: latency-svc-kwtjd
Dec 26 13:10:32.580: INFO: Got endpoints: latency-svc-tzcvs [3.519333643s]
Dec 26 13:10:32.584: INFO: Got endpoints: latency-svc-kwtjd [3.281694549s]
Dec 26 13:10:32.840: INFO: Created: latency-svc-cqr2d
Dec 26 13:10:32.914: INFO: Got endpoints: latency-svc-cqr2d [3.383949363s]
Dec 26 13:10:33.344: INFO: Created: latency-svc-rpgfb
Dec 26 13:10:33.378: INFO: Got endpoints: latency-svc-rpgfb [3.554111845s]
Dec 26 13:10:33.685: INFO: Created: latency-svc-z578g
Dec 26 13:10:33.693: INFO: Got endpoints: latency-svc-z578g [3.672173585s]
Dec 26 13:10:33.902: INFO: Created: latency-svc-8nxmr
Dec 26 13:10:33.960: INFO: Got endpoints: latency-svc-8nxmr [3.88923095s]
Dec 26 13:10:34.084: INFO: Created: latency-svc-n9hpg
Dec 26 13:10:34.106: INFO: Got endpoints: latency-svc-n9hpg [3.807543712s]
Dec 26 13:10:34.297: INFO: Created: latency-svc-dr9h7
Dec 26 13:10:34.567: INFO: Got endpoints: latency-svc-dr9h7 [3.691760851s]
Dec 26 13:10:34.632: INFO: Created: latency-svc-2gjqx
Dec 26 13:10:34.632: INFO: Got endpoints: latency-svc-2gjqx [3.722741923s]
Dec 26 13:10:34.859: INFO: Created: latency-svc-pphf4
Dec 26 13:10:34.883: INFO: Got endpoints: latency-svc-pphf4 [3.725542331s]
Dec 26 13:10:35.134: INFO: Created: latency-svc-hj29j
Dec 26 13:10:35.142: INFO: Got endpoints: latency-svc-hj29j [3.714529026s]
Dec 26 13:10:35.409: INFO: Created: latency-svc-9zmsn
Dec 26 13:10:35.423: INFO: Got endpoints: latency-svc-9zmsn [3.718393111s]
Dec 26 13:10:35.621: INFO: Created: latency-svc-8wjsv
Dec 26 13:10:35.627: INFO: Got endpoints: latency-svc-8wjsv [3.889227196s]
Dec 26 13:10:35.707: INFO: Created: latency-svc-5hzzg
Dec 26 13:10:35.971: INFO: Got endpoints: latency-svc-5hzzg [3.930078558s]
Dec 26 13:10:36.029: INFO: Created: latency-svc-n5shh
Dec 26 13:10:36.196: INFO: Got endpoints: latency-svc-n5shh [3.668743227s]
Dec 26 13:10:36.221: INFO: Created: latency-svc-v6wbh
Dec 26 13:10:36.273: INFO: Got endpoints: latency-svc-v6wbh [3.691790746s]
Dec 26 13:10:36.477: INFO: Created: latency-svc-mrhd5
Dec 26 13:10:36.590: INFO: Created: latency-svc-lg8qp
Dec 26 13:10:36.722: INFO: Got endpoints: latency-svc-mrhd5 [4.138024462s]
Dec 26 13:10:37.145: INFO: Got endpoints: latency-svc-lg8qp [4.231223301s]
Dec 26 13:10:37.160: INFO: Created: latency-svc-zhjxf
Dec 26 13:10:37.167: INFO: Got endpoints: latency-svc-zhjxf [3.789206057s]
Dec 26 13:10:37.245: INFO: Created: latency-svc-m467g
Dec 26 13:10:37.394: INFO: Got endpoints: latency-svc-m467g [3.700377183s]
Dec 26 13:10:37.410: INFO: Created: latency-svc-wf9r2
Dec 26 13:10:37.436: INFO: Got endpoints: latency-svc-wf9r2 [3.475912291s]
Dec 26 13:10:37.491: INFO: Created: latency-svc-v8cbt
Dec 26 13:10:37.650: INFO: Got endpoints: latency-svc-v8cbt [3.54409118s]
Dec 26 13:10:37.674: INFO: Created: latency-svc-d4tzx
Dec 26 13:10:37.688: INFO: Got endpoints: latency-svc-d4tzx [3.121046735s]
Dec 26 13:10:38.646: INFO: Created: latency-svc-2dxjj
Dec 26 13:10:38.813: INFO: Got endpoints: latency-svc-2dxjj [4.180379003s]
Dec 26 13:10:38.868: INFO: Created: latency-svc-ppmw2
Dec 26 13:10:39.028: INFO: Got endpoints: latency-svc-ppmw2 [4.144488841s]
Dec 26 13:10:39.135: INFO: Created: latency-svc-xrqsf
Dec 26 13:10:39.227: INFO: Got endpoints: latency-svc-xrqsf [4.085298121s]
Dec 26 13:10:39.364: INFO: Created: latency-svc-wmvx4
Dec 26 13:10:39.437: INFO: Got endpoints: latency-svc-wmvx4 [4.013892905s]
Dec 26 13:10:39.472: INFO: Created: latency-svc-f6d2l
Dec 26 13:10:39.510: INFO: Created: latency-svc-tnx2q
Dec 26 13:10:39.517: INFO: Got endpoints: latency-svc-f6d2l [3.88955907s]
Dec 26 13:10:39.734: INFO: Got endpoints: latency-svc-tnx2q [3.762297287s]
Dec 26 13:10:39.764: INFO: Created: latency-svc-lmmlj
Dec 26 13:10:39.788: INFO: Got endpoints: latency-svc-lmmlj [3.592442678s]
Dec 26 13:10:40.075: INFO: Created: latency-svc-m9pdm
Dec 26 13:10:40.075: INFO: Got endpoints: latency-svc-m9pdm [3.802253774s]
Dec 26 13:10:40.367: INFO: Created: latency-svc-xdtx6
Dec 26 13:10:40.420: INFO: Got endpoints: latency-svc-xdtx6 [3.6968537s]
Dec 26 13:10:40.847: INFO: Created: latency-svc-6rx5b
Dec 26 13:10:41.098: INFO: Got endpoints: latency-svc-6rx5b [3.952514149s]
Dec 26 13:10:41.135: INFO: Created: latency-svc-bzc2m
Dec 26 13:10:41.158: INFO: Got endpoints: latency-svc-bzc2m [3.991172872s]
Dec 26 13:10:41.194: INFO: Created: latency-svc-r68v7
Dec 26 13:10:41.305: INFO: Got endpoints: latency-svc-r68v7 [3.911103447s]
Dec 26 13:10:41.333: INFO: Created: latency-svc-4vfjr
Dec 26 13:10:41.352: INFO: Got endpoints: latency-svc-4vfjr [3.916042866s]
Dec 26 13:10:41.534: INFO: Created: latency-svc-kxfps
Dec 26 13:10:41.539: INFO: Got endpoints: latency-svc-kxfps [3.888850919s]
Dec 26 13:10:41.785: INFO: Created: latency-svc-42wmm
Dec 26 13:10:41.790: INFO: Got endpoints: latency-svc-42wmm [4.101992133s]
Dec 26 13:10:42.057: INFO: Created: latency-svc-9tpws
Dec 26 13:10:42.079: INFO: Got endpoints: latency-svc-9tpws [3.266041577s]
Dec 26 13:10:42.137: INFO: Created: latency-svc-w7k8h
Dec 26 13:10:42.266: INFO: Got endpoints: latency-svc-w7k8h [3.23798519s]
Dec 26 13:10:42.302: INFO: Created: latency-svc-jnqbr
Dec 26 13:10:42.364: INFO: Got endpoints: latency-svc-jnqbr [3.136177185s]
Dec 26 13:10:42.387: INFO: Created: latency-svc-wm59n
Dec 26 13:10:43.095: INFO: Created: latency-svc-lrhlf
Dec 26 13:10:43.118: INFO: Created: latency-svc-575rt
Dec 26 13:10:43.156: INFO: Got endpoints: latency-svc-575rt [3.421023871s]
Dec 26 13:10:43.156: INFO: Got endpoints: latency-svc-lrhlf [3.638753577s]
Dec 26 13:10:43.362: INFO: Got endpoints: latency-svc-wm59n [3.924327415s]
Dec 26 13:10:43.389: INFO: Created: latency-svc-lngpz
Dec 26 13:10:43.435: INFO: Got endpoints: latency-svc-lngpz [3.646442627s]
Dec 26 13:10:43.596: INFO: Created: latency-svc-w8jfn
Dec 26 13:10:43.637: INFO: Got endpoints: latency-svc-w8jfn [3.562035873s]
Dec 26 13:10:43.833: INFO: Created: latency-svc-vvvqk
Dec 26 13:10:43.894: INFO: Got endpoints: latency-svc-vvvqk [3.474459213s]
Dec 26 13:10:44.084: INFO: Created: latency-svc-rd4x5
Dec 26 13:10:44.181: INFO: Got endpoints: latency-svc-rd4x5 [3.08279481s]
Dec 26 13:10:44.291: INFO: Created: latency-svc-v4n8l
Dec 26 13:10:44.295: INFO: Got endpoints: latency-svc-v4n8l [3.136667632s]
Dec 26 13:10:44.406: INFO: Created: latency-svc-pnrln
Dec 26 13:10:44.692: INFO: Got endpoints: latency-svc-pnrln [3.386521396s]
Dec 26 13:10:44.704: INFO: Created: latency-svc-8g292
Dec 26 13:10:45.010: INFO: Got endpoints: latency-svc-8g292 [3.658022059s]
Dec 26 13:10:45.266: INFO: Created: latency-svc-p4cth
Dec 26 13:10:45.287: INFO: Got endpoints: latency-svc-p4cth [3.747071116s]
Dec 26 13:10:45.366: INFO: Created: latency-svc-lqk6d
Dec 26 13:10:45.516: INFO: Got endpoints: latency-svc-lqk6d [3.725141408s]
Dec 26 13:10:45.532: INFO: Created: latency-svc-q5wxt
Dec 26 13:10:45.712: INFO: Got endpoints: latency-svc-q5wxt [3.632450858s]
Dec 26 13:10:45.723: INFO: Created: latency-svc-52w9t
Dec 26 13:10:45.733: INFO: Got endpoints: latency-svc-52w9t [3.465944584s]
Dec 26 13:10:46.019: INFO: Created: latency-svc-hbm4d
Dec 26 13:10:46.029: INFO: Got endpoints: latency-svc-hbm4d [3.665454872s]
Dec 26 13:10:46.277: INFO: Created: latency-svc-5ft2v
Dec 26 13:10:46.300: INFO: Got endpoints: latency-svc-5ft2v [3.1439294s]
Dec 26 13:10:46.468: INFO: Created: latency-svc-bhcvs
Dec 26 13:10:46.516: INFO: Got endpoints: latency-svc-bhcvs [3.360483917s]
Dec 26 13:10:46.737: INFO: Created: latency-svc-8gvsh
Dec 26 13:10:46.740: INFO: Got endpoints: latency-svc-8gvsh [3.377527557s]
Dec 26 13:10:46.843: INFO: Created: latency-svc-9wznm
Dec 26 13:10:46.978: INFO: Got endpoints: latency-svc-9wznm [3.54227246s]
Dec 26 13:10:47.170: INFO: Created: latency-svc-n6jng
Dec 26 13:10:47.187: INFO: Got endpoints: latency-svc-n6jng [3.548904418s]
Dec 26 13:10:47.294: INFO: Created: latency-svc-p2mvq
Dec 26 13:10:47.381: INFO: Got endpoints: latency-svc-p2mvq [3.486201358s]
Dec 26 13:10:47.426: INFO: Created: latency-svc-6dqwm
Dec 26 13:10:47.448: INFO: Got endpoints: latency-svc-6dqwm [3.266342045s]
Dec 26 13:10:47.652: INFO: Created: latency-svc-vvsgm
Dec 26 13:10:47.766: INFO: Got endpoints: latency-svc-vvsgm [3.470570437s]
Dec 26 13:10:47.838: INFO: Created: latency-svc-btklp
Dec 26 13:10:48.016: INFO: Created: latency-svc-rn7nk
Dec 26 13:10:48.060: INFO: Got endpoints: latency-svc-btklp [3.367951219s]
Dec 26 13:10:48.060: INFO: Got endpoints: latency-svc-rn7nk [3.048904207s]
Dec 26 13:10:48.196: INFO: Created: latency-svc-gdcs2
Dec 26 13:10:48.223: INFO: Got endpoints: latency-svc-gdcs2 [2.936099235s]
Dec 26 13:10:48.411: INFO: Created: latency-svc-djf5s
Dec 26 13:10:48.462: INFO: Got endpoints: latency-svc-djf5s [2.945717172s]
Dec 26 13:10:48.632: INFO: Created: latency-svc-5pjrf
Dec 26 13:10:48.717: INFO: Got endpoints: latency-svc-5pjrf [3.004786215s]
Dec 26 13:10:48.738: INFO: Created: latency-svc-xqq8s
Dec 26 13:10:48.886: INFO: Got endpoints: latency-svc-xqq8s [3.15291124s]
Dec 26 13:10:49.140: INFO: Created: latency-svc-qqpl9
Dec 26 13:10:49.181: INFO: Got endpoints: latency-svc-qqpl9 [3.151604606s]
Dec 26 13:10:51.694: INFO: Created: latency-svc-frfn9
Dec 26 13:10:51.721: INFO: Got endpoints: latency-svc-frfn9 [5.420972305s]
Dec 26 13:10:51.969: INFO: Created: latency-svc-d4tkj
Dec 26 13:10:51.978: INFO: Got endpoints: latency-svc-d4tkj [5.461070462s]
Dec 26 13:10:52.194: INFO: Created: latency-svc-wvl7n
Dec 26 13:10:52.346: INFO: Got endpoints: latency-svc-wvl7n [5.606046068s]
Dec 26 13:10:52.378: INFO: Created: latency-svc-sx5kl
Dec 26 13:10:52.399: INFO: Got endpoints: latency-svc-sx5kl [5.421122785s]
Dec 26 13:10:52.642: INFO: Created: latency-svc-ckbl6
Dec 26 13:10:52.687: INFO: Got endpoints: latency-svc-ckbl6 [5.500171459s]
Dec 26 13:10:52.873: INFO: Created: latency-svc-kkn9q
Dec 26 13:10:52.906: INFO: Got endpoints: latency-svc-kkn9q [5.524190747s]
Dec 26 13:10:53.109: INFO: Created: latency-svc-d8d9x
Dec 26 13:10:53.150: INFO: Got endpoints: latency-svc-d8d9x [5.702066775s]
Dec 26 13:10:53.189: INFO: Created: latency-svc-brlvf
Dec 26 13:10:53.353: INFO: Created: latency-svc-6vmz8
Dec 26 13:10:53.367: INFO: Got endpoints: latency-svc-brlvf [5.60134514s]
Dec 26 13:10:53.386: INFO: Got endpoints: latency-svc-6vmz8 [5.325456166s]
Dec 26 13:10:53.406: INFO: Created: latency-svc-ndg6m
Dec 26 13:10:53.424: INFO: Got endpoints: latency-svc-ndg6m [5.364299749s]
Dec 26 13:10:53.646: INFO: Created: latency-svc-6xj99
Dec 26 13:10:53.654: INFO: Got endpoints: latency-svc-6xj99 [5.430442663s]
Dec 26 13:10:54.181: INFO: Created: latency-svc-qf7n2
Dec 26 13:10:54.187: INFO: Got endpoints: latency-svc-qf7n2 [5.725084878s]
Dec 26 13:10:54.253: INFO: Created: latency-svc-tqnv5
Dec 26 13:10:54.465: INFO: Got endpoints: latency-svc-tqnv5 [5.748098637s]
Dec 26 13:10:54.494: INFO: Created: latency-svc-sttpk
Dec 26 13:10:54.796: INFO: Got endpoints: latency-svc-sttpk [5.909959113s]
Dec 26 13:10:54.797: INFO: Created: latency-svc-gbshz
Dec 26 13:10:54.805: INFO: Got endpoints: latency-svc-gbshz [5.62340562s]
Dec 26 13:10:55.117: INFO: Created: latency-svc-2cfhv
Dec 26 13:10:55.292: INFO: Got endpoints: latency-svc-2cfhv [3.570997137s]
Dec 26 13:10:55.310: INFO: Created: latency-svc-kpqkc
Dec 26 13:10:55.324: INFO: Got endpoints: latency-svc-kpqkc [3.346120583s]
Dec 26 13:10:55.489: INFO: Created: latency-svc-qzwc2
Dec 26 13:10:55.510: INFO: Got endpoints: latency-svc-qzwc2 [3.163555406s]
Dec 26 13:10:55.744: INFO: Created: latency-svc-t6dwr
Dec 26 13:10:55.765: INFO: Got endpoints: latency-svc-t6dwr [3.365202557s]
Dec 26 13:10:56.038: INFO: Created: latency-svc-lbczs
Dec 26 13:10:56.063: INFO: Got endpoints: latency-svc-lbczs [3.375934686s]
Dec 26 13:10:56.376: INFO: Created: latency-svc-dxcpr
Dec 26 13:10:56.438: INFO: Got endpoints: latency-svc-dxcpr [3.531827728s]
Dec 26 13:10:56.632: INFO: Created: latency-svc-mz5tv
Dec 26 13:10:56.659: INFO: Got endpoints: latency-svc-mz5tv [3.508580112s]
Dec 26 13:10:56.753: INFO: Created: latency-svc-wbr6d
Dec 26 13:10:56.756: INFO: Got endpoints: latency-svc-wbr6d [3.388630301s]
Dec 26 13:10:56.941: INFO: Created: latency-svc-gbjp4
Dec 26 13:10:57.207: INFO: Got endpoints: latency-svc-gbjp4 [3.820807188s]
Dec 26 13:10:57.246: INFO: Created: latency-svc-zcgt9
Dec 26 13:10:57.266: INFO: Got endpoints: latency-svc-zcgt9 [3.841502075s]
Dec 26 13:10:57.505: INFO: Created: latency-svc-65gc8
Dec 26 13:10:57.563: INFO: Got endpoints: latency-svc-65gc8 [3.909294366s]
Dec 26 13:10:57.567: INFO: Created: latency-svc-mcxqj
Dec 26 13:10:57.735: INFO: Got endpoints: latency-svc-mcxqj [3.547263083s]
Dec 26 13:10:57.784: INFO: Created: latency-svc-zz8ht
Dec 26 13:10:57.811: INFO: Got endpoints: latency-svc-zz8ht [3.345359167s]
Dec 26 13:10:57.922: INFO: Created: latency-svc-krgz5
Dec 26 13:10:57.952: INFO: Got endpoints: latency-svc-krgz5 [3.155116466s]
Dec 26 13:10:58.158: INFO: Created: latency-svc-rrwjq
Dec 26 13:10:58.225: INFO: Created: latency-svc-l6tpv
Dec 26 13:10:58.244: INFO: Got endpoints: latency-svc-rrwjq [3.439520901s]
Dec 26 13:10:58.338: INFO: Got endpoints: latency-svc-l6tpv [3.045564218s]
Dec 26 13:10:58.385: INFO: Created: latency-svc-xhgbk
Dec 26 13:10:58.399: INFO: Got endpoints: latency-svc-xhgbk [3.075276862s]
Dec 26 13:10:58.618: INFO: Created: latency-svc-8hhj5
Dec 26 13:10:58.620: INFO: Got endpoints: latency-svc-8hhj5 [3.109658361s]
Dec 26 13:10:58.823: INFO: Created: latency-svc-r627n
Dec 26 13:10:58.877: INFO: Got endpoints: latency-svc-r627n [3.112267053s]
Dec 26 13:10:59.101: INFO: Created: latency-svc-f2qqh
Dec 26 13:10:59.125: INFO: Got endpoints: latency-svc-f2qqh [3.062058239s]
Dec 26 13:10:59.205: INFO: Created: latency-svc-jj2vc
Dec 26 13:10:59.296: INFO: Got endpoints: latency-svc-jj2vc [2.858232794s]
Dec 26 13:10:59.321: INFO: Created: latency-svc-rt4q7
Dec 26 13:10:59.331: INFO: Got endpoints: latency-svc-rt4q7 [2.671724573s]
Dec 26 13:10:59.411: INFO: Created: latency-svc-sqnnc
Dec 26 13:10:59.498: INFO: Got endpoints: latency-svc-sqnnc [2.741513923s]
Dec 26 13:10:59.531: INFO: Created: latency-svc-qhmr2
Dec 26 13:10:59.572: INFO: Got endpoints: latency-svc-qhmr2 [2.365250961s]
Dec 26 13:10:59.589: INFO: Created: latency-svc-k2tx7
Dec 26 13:10:59.694: INFO: Got endpoints: latency-svc-k2tx7 [2.427345472s]
Dec 26 13:10:59.731: INFO: Created: latency-svc-476cx
Dec 26 13:10:59.731: INFO: Got endpoints: latency-svc-476cx [2.167160927s]
Dec 26 13:10:59.762: INFO: Created: latency-svc-9gb2k
Dec 26 13:10:59.771: INFO: Got endpoints: latency-svc-9gb2k [2.036108551s]
Dec 26 13:10:59.920: INFO: Created: latency-svc-6pf9z
Dec 26 13:10:59.922: INFO: Got endpoints: latency-svc-6pf9z [2.110389263s]
Dec 26 13:11:00.079: INFO: Created: latency-svc-pcvmj
Dec 26 13:11:00.135: INFO: Created: latency-svc-q4x7r
Dec 26 13:11:00.137: INFO: Got endpoints: latency-svc-pcvmj [2.185654523s]
Dec 26 13:11:00.149: INFO: Got endpoints: latency-svc-q4x7r [1.904855872s]
Dec 26 13:11:00.302: INFO: Created: latency-svc-pftzv
Dec 26 13:11:00.302: INFO: Got endpoints: latency-svc-pftzv [1.963196887s]
Dec 26 13:11:00.602: INFO: Created: latency-svc-wh4ph
Dec 26 13:11:00.794: INFO: Got endpoints: latency-svc-wh4ph [2.39442789s]
Dec 26 13:11:00.828: INFO: Created: latency-svc-kpdd6
Dec 26 13:11:00.849: INFO: Got endpoints: latency-svc-kpdd6 [2.229090525s]
Dec 26 13:11:01.028: INFO: Created: latency-svc-jhszm
Dec 26 13:11:01.063: INFO: Got endpoints: latency-svc-jhszm [2.185322483s]
Dec 26 13:11:01.225: INFO: Created: latency-svc-brnsz
Dec 26 13:11:01.254: INFO: Got endpoints: latency-svc-brnsz [2.128216971s]
Dec 26 13:11:01.405: INFO: Created: latency-svc-gwkwl
Dec 26 13:11:01.422: INFO: Got endpoints: latency-svc-gwkwl [2.12548043s]
Dec 26 13:11:01.495: INFO: Created: latency-svc-jnfpz
Dec 26 13:11:01.643: INFO: Got endpoints: latency-svc-jnfpz [2.311773068s]
Dec 26 13:11:01.658: INFO: Created: latency-svc-8m8t8
Dec 26 13:11:01.674: INFO: Got endpoints: latency-svc-8m8t8 [2.175647309s]
Dec 26 13:11:01.775: INFO: Created: latency-svc-chn4m
Dec 26 13:11:01.816: INFO: Got endpoints: latency-svc-chn4m [2.243548056s]
Dec 26 13:11:02.063: INFO: Created: latency-svc-rz7wl
Dec 26 13:11:02.151: INFO: Got endpoints: latency-svc-rz7wl [2.457361847s]
Dec 26 13:11:02.290: INFO: Created: latency-svc-9b5tb
Dec 26 13:11:02.325: INFO: Got endpoints: latency-svc-9b5tb [2.594578015s]
Dec 26 13:11:02.515: INFO: Created: latency-svc-zqp98
Dec 26 13:11:02.603: INFO: Got endpoints: latency-svc-zqp98 [2.831616324s]
Dec 26 13:11:02.910: INFO: Created: latency-svc-vmf92
Dec 26 13:11:02.988: INFO: Got endpoints: latency-svc-vmf92 [3.065778161s]
Dec 26 13:11:03.130: INFO: Created: latency-svc-nrfhx
Dec 26 13:11:03.153: INFO: Got endpoints: latency-svc-nrfhx [3.015289257s]
Dec 26 13:11:03.226: INFO: Created: latency-svc-jpqqh
Dec 26 13:11:03.384: INFO: Got endpoints: latency-svc-jpqqh [3.233912729s]
Dec 26 13:11:03.397: INFO: Created: latency-svc-njl8r
Dec 26 13:11:03.426: INFO: Got endpoints: latency-svc-njl8r [3.124550903s]
Dec 26 13:11:03.496: INFO: Created: latency-svc-lpnrz
Dec 26 13:11:03.655: INFO: Got endpoints: latency-svc-lpnrz [2.861264686s]
Dec 26 13:11:03.701: INFO: Created: latency-svc-r8567
Dec 26 13:11:03.720: INFO: Got endpoints: latency-svc-r8567 [2.871010386s]
Dec 26 13:11:03.925: INFO: Created: latency-svc-4j87w
Dec 26 13:11:03.942: INFO: Got endpoints: latency-svc-4j87w [2.878403183s]
Dec 26 13:11:04.111: INFO: Created: latency-svc-dcsqj
Dec 26 13:11:04.239: INFO: Got endpoints: latency-svc-dcsqj [2.985199597s]
Dec 26 13:11:04.247: INFO: Created: latency-svc-fn6fg
Dec 26 13:11:04.256: INFO: Got endpoints: latency-svc-fn6fg [2.83381875s]
Dec 26 13:11:04.482: INFO: Created: latency-svc-btf67
Dec 26 13:11:04.495: INFO: Got endpoints: latency-svc-btf67 [2.85219873s]
Dec 26 13:11:04.737: INFO: Created: latency-svc-845m9
Dec 26 13:11:04.749: INFO: Got endpoints: latency-svc-845m9 [3.075033993s]
Dec 26 13:11:04.830: INFO: Created: latency-svc-zssmb
Dec 26 13:11:05.100: INFO: Got endpoints: latency-svc-zssmb [3.28421615s]
Dec 26 13:11:05.149: INFO: Created: latency-svc-qxvcr
Dec 26 13:11:05.319: INFO: Got endpoints: latency-svc-qxvcr [3.167017313s]
Dec 26 13:11:05.349: INFO: Created: latency-svc-cnpd2
Dec 26 13:11:05.388: INFO: Got endpoints: latency-svc-cnpd2 [3.062829549s]
Dec 26 13:11:05.580: INFO: Created: latency-svc-9jghm
Dec 26 13:11:05.589: INFO: Got endpoints: latency-svc-9jghm [2.985474506s]
Dec 26 13:11:05.662: INFO: Created: latency-svc-7rmzm
Dec 26 13:11:05.800: INFO: Got endpoints: latency-svc-7rmzm [2.812019918s]
Dec 26 13:11:05.819: INFO: Created: latency-svc-5qm48
Dec 26 13:11:05.846: INFO: Got endpoints: latency-svc-5qm48 [2.692699987s]
Dec 26 13:11:06.154: INFO: Created: latency-svc-bfg8f
Dec 26 13:11:06.174: INFO: Got endpoints: latency-svc-bfg8f [2.789891161s]
Dec 26 13:11:06.312: INFO: Created: latency-svc-p8dvg
Dec 26 13:11:06.322: INFO: Got endpoints: latency-svc-p8dvg [2.89523148s]
Dec 26 13:11:06.497: INFO: Created: latency-svc-w7r8c
Dec 26 13:11:06.684: INFO: Got endpoints: latency-svc-w7r8c [3.027619831s]
Dec 26 13:11:06.757: INFO: Created: latency-svc-rmvmr
Dec 26 13:11:06.881: INFO: Got endpoints: latency-svc-rmvmr [3.160799029s]
Dec 26 13:11:06.882: INFO: Latencies: [265.405926ms 413.907626ms 507.504817ms 592.092178ms 896.035353ms 1.026981041s 1.05997602s 1.261642105s 1.314728599s 1.519881756s 1.59578574s 1.904855872s 1.939448696s 1.963196887s 2.036108551s 2.110389263s 2.12548043s 2.128216971s 2.167160927s 2.175647309s 2.176207678s 2.185322483s 2.185654523s 2.229090525s 2.231389797s 2.243548056s 2.253416725s 2.311773068s 2.361145067s 2.365250961s 2.388051203s 2.39442789s 2.396445678s 2.427345472s 2.457361847s 2.48237769s 2.508103142s 2.594578015s 2.654749185s 2.671724573s 2.692699987s 2.737536197s 2.741513923s 2.789891161s 2.798552143s 2.812019918s 2.831616324s 2.83381875s 2.85219873s 2.858232794s 2.861264686s 2.871010386s 2.878403183s 2.887856234s 2.89523148s 2.929959135s 2.934424866s 2.936099235s 2.945717172s 2.985199597s 2.985474506s 2.99750769s 3.004786215s 3.00567663s 3.012627665s 3.015289257s 3.018292725s 3.022177781s 3.027619831s 3.041435964s 3.043214277s 3.045564218s 3.048763877s 3.048904207s 3.062058239s 3.062829549s 3.065778161s 3.075033993s 3.075276862s 3.082608245s 3.08279481s 3.092204381s 3.109658361s 3.112267053s 3.121046735s 3.124550903s 3.136177185s 3.136458497s 3.136667632s 3.1439294s 3.151604606s 3.15291124s 3.155116466s 3.160799029s 3.163555406s 3.167017313s 3.177989245s 3.178826819s 3.217981976s 3.233912729s 3.23798519s 3.266041577s 3.266342045s 3.276739988s 3.281694549s 3.28421615s 3.318839939s 3.345359167s 3.346120583s 3.347753132s 3.355940141s 3.360483917s 3.365202557s 3.367951219s 3.369910913s 3.375934686s 3.377527557s 3.383949363s 3.386521396s 3.388630301s 3.421023871s 3.439520901s 3.465944584s 3.470570437s 3.474459213s 3.475912291s 3.486201358s 3.504598699s 3.508580112s 3.519333643s 3.531827728s 3.54227246s 3.54409118s 3.547263083s 3.548904418s 3.550603199s 3.554111845s 3.562035873s 3.568727949s 3.570997137s 3.578459027s 3.592442678s 3.632363767s 3.632450858s 3.638753577s 3.646442627s 3.658022059s 3.665454872s 3.668743227s 3.672173585s 3.691760851s 3.691790746s 3.6968537s 3.700377183s 3.714529026s 3.718393111s 3.722741923s 3.725141408s 3.725542331s 3.747071116s 3.762297287s 3.766869257s 3.789206057s 3.802253774s 3.807543712s 3.820807188s 3.841502075s 3.888850919s 3.889227196s 3.88923095s 3.88955907s 3.909294366s 3.911103447s 3.916042866s 3.924327415s 3.930078558s 3.952514149s 3.991172872s 4.013892905s 4.085298121s 4.101992133s 4.138024462s 4.144488841s 4.180379003s 4.231223301s 5.325456166s 5.364299749s 5.420972305s 5.421122785s 5.430442663s 5.461070462s 5.500171459s 5.524190747s 5.60134514s 5.606046068s 5.62340562s 5.702066775s 5.725084878s 5.748098637s 5.909959113s]
Dec 26 13:11:06.883: INFO: 50 %ile: 3.23798519s
Dec 26 13:11:06.883: INFO: 90 %ile: 4.101992133s
Dec 26 13:11:06.883: INFO: 99 %ile: 5.748098637s
Dec 26 13:11:06.883: INFO: Total sample count: 200
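The "50 %ile / 90 %ile / 99 %ile" lines above are computed from the sorted 200-entry Latencies slice. A minimal nearest-rank sketch of that summary step; the exact index formula the e2e framework uses is an assumption here, not a copy:

```python
def percentile(sorted_samples, p):
    """Nearest-rank percentile over an ascending sample list.

    Mirrors the '%ile' summary lines in the log; the index choice
    (len * p // 100) is an assumed approximation of the framework's rule.
    """
    if not sorted_samples:
        raise ValueError("no samples")
    idx = (len(sorted_samples) * p) // 100
    return sorted_samples[min(idx, len(sorted_samples) - 1)]
```

With 200 samples, p=50 picks the element at index 100 of the sorted slice.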
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:11:06.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-wbhtr" for this suite.
Dec 26 13:12:17.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:12:17.184: INFO: namespace: e2e-tests-svc-latency-wbhtr, resource: bindings, ignored listing per whitelist
Dec 26 13:12:17.206: INFO: namespace e2e-tests-svc-latency-wbhtr deletion completed in 1m10.273885533s

• [SLOW TEST:128.314 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
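Durations in this log are printed in Go's `time.Duration` format (e.g. the namespace deletion time `1m10.273885533s`, or sample latencies like `3.23798519s`). A small sketch for turning those strings into seconds when post-processing a run like this; it handles only the h/m/ms/s units that appear in this log:

```python
import re

# Optional hour, minute, millisecond, and second components, anchored
# so garbage input fails to match rather than partially parsing.
_DUR = re.compile(r"^(?:(\d+)h)?(?:(\d+)m)?(?:([\d.]+)ms)?(?:([\d.]+)s)?$")

def parse_go_duration(s):
    """Parse Go-style durations as printed in this log ('1m10.273885533s',
    '8.042713ms') into float seconds. h/m/ms/s only; other Go units
    (ns, us) are out of scope for this sketch."""
    m = _DUR.match(s)
    if not m or not any(m.groups()):
        raise ValueError(f"bad duration: {s!r}")
    h, mins, ms, secs = m.groups()
    return (int(h or 0) * 3600 + int(mins or 0) * 60
            + float(ms or 0) / 1000 + float(secs or 0))
```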
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:12:17.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 13:12:17.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005" in namespace "e2e-tests-downward-api-qtn6z" to be "success or failure"
Dec 26 13:12:17.395: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042713ms
Dec 26 13:12:19.411: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023866999s
Dec 26 13:12:21.458: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070723258s
Dec 26 13:12:25.022: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.6349778s
Dec 26 13:12:27.066: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.67893856s
Dec 26 13:12:29.098: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.711315295s
Dec 26 13:12:31.127: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.740058781s
STEP: Saw pod success
Dec 26 13:12:31.127: INFO: Pod "downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:12:31.148: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 13:12:31.339: INFO: Waiting for pod downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005 to disappear
Dec 26 13:12:31.421: INFO: Pod downwardapi-volume-5748affd-27e1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:12:31.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qtn6z" for this suite.
Dec 26 13:12:37.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:12:37.538: INFO: namespace: e2e-tests-downward-api-qtn6z, resource: bindings, ignored listing per whitelist
Dec 26 13:12:37.592: INFO: namespace e2e-tests-downward-api-qtn6z deletion completed in 6.161033783s

• [SLOW TEST:20.386 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
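The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed:` lines above are a poll loop: fetch the pod phase every couple of seconds until it reaches a terminal phase or the timeout expires. A minimal sketch of that loop; the helper name, the ~2s interval, and the injectable `clock`/`sleep` parameters are ours for illustration:

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    mirroring the framework's 'Waiting up to 5m0s ... Elapsed' logging.
    Returns (phase, elapsed_seconds); raises TimeoutError on timeout."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(poll_s)
```

Injecting `clock` and `sleep` keeps the loop testable without a real cluster or real waiting.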
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:12:37.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 26 13:12:37.776: INFO: Waiting up to 5m0s for pod "pod-63704967-27e1-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-6vbgq" to be "success or failure"
Dec 26 13:12:37.790: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.068406ms
Dec 26 13:12:40.144: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368214364s
Dec 26 13:12:42.175: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398785241s
Dec 26 13:12:44.541: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764794331s
Dec 26 13:12:46.653: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877079129s
Dec 26 13:12:48.710: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.934135581s
Dec 26 13:12:50.733: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.957166865s
STEP: Saw pod success
Dec 26 13:12:50.733: INFO: Pod "pod-63704967-27e1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:12:50.744: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-63704967-27e1-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 13:12:52.605: INFO: Waiting for pod pod-63704967-27e1-11ea-948a-0242ac110005 to disappear
Dec 26 13:12:52.652: INFO: Pod pod-63704967-27e1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:12:52.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6vbgq" for this suite.
Dec 26 13:12:58.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:12:58.910: INFO: namespace: e2e-tests-emptydir-6vbgq, resource: bindings, ignored listing per whitelist
Dec 26 13:12:58.954: INFO: namespace e2e-tests-emptydir-6vbgq deletion completed in 6.291036499s

• [SLOW TEST:21.362 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
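The `(non-root,0777,tmpfs)` test above creates a pod that writes into a tmpfs-backed emptyDir and asserts the expected permission bits from inside the test container. A local sketch of the mode check itself, with a temp directory standing in for the pod's emptyDir mount (the real test runs this inside the pod, not on the host):

```python
import os
import stat
import tempfile

def mode_bits(path):
    """Permission bits of a path, in the 0oXXX form the emptyDir tests assert on."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Stand-in for the emptyDir mount point; chmod makes the mode explicit
# rather than umask-dependent, like the volume's mode setting does.
mount = tempfile.mkdtemp()
os.chmod(mount, 0o777)
```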
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:12:58.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 26 13:12:59.212: INFO: Waiting up to 5m0s for pod "pod-70324f20-27e1-11ea-948a-0242ac110005" in namespace "e2e-tests-emptydir-zc6ct" to be "success or failure"
Dec 26 13:12:59.306: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.189664ms
Dec 26 13:13:01.413: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200854144s
Dec 26 13:13:03.424: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212112551s
Dec 26 13:13:06.121: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.908284506s
Dec 26 13:13:08.326: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.113525259s
Dec 26 13:13:10.341: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.128574235s
Dec 26 13:13:12.381: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.16894962s
STEP: Saw pod success
Dec 26 13:13:12.382: INFO: Pod "pod-70324f20-27e1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:13:12.422: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-70324f20-27e1-11ea-948a-0242ac110005 container test-container: 
STEP: delete the pod
Dec 26 13:13:12.682: INFO: Waiting for pod pod-70324f20-27e1-11ea-948a-0242ac110005 to disappear
Dec 26 13:13:12.709: INFO: Pod pod-70324f20-27e1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:13:12.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zc6ct" for this suite.
Dec 26 13:13:18.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:13:19.020: INFO: namespace: e2e-tests-emptydir-zc6ct, resource: bindings, ignored listing per whitelist
Dec 26 13:13:19.092: INFO: namespace e2e-tests-emptydir-zc6ct deletion completed in 6.341135132s

• [SLOW TEST:20.138 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:13:19.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-7c3cd429-27e1-11ea-948a-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 26 13:13:19.464: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005" in namespace "e2e-tests-configmap-cq5l7" to be "success or failure"
Dec 26 13:13:19.495: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.364034ms
Dec 26 13:13:21.730: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266538856s
Dec 26 13:13:23.751: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287213124s
Dec 26 13:13:25.778: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313875469s
Dec 26 13:13:27.935: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470869106s
Dec 26 13:13:29.972: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.507829939s
Dec 26 13:13:31.990: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.525805214s
STEP: Saw pod success
Dec 26 13:13:31.990: INFO: Pod "pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:13:32.010: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 26 13:13:32.249: INFO: Waiting for pod pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005 to disappear
Dec 26 13:13:32.293: INFO: Pod pod-configmaps-7c3d7c59-27e1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:13:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cq5l7" for this suite.
Dec 26 13:13:38.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:13:38.774: INFO: namespace: e2e-tests-configmap-cq5l7, resource: bindings, ignored listing per whitelist
Dec 26 13:13:38.920: INFO: namespace e2e-tests-configmap-cq5l7 deletion completed in 6.602015666s

• [SLOW TEST:19.828 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
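The ConfigMap test above mounts the ConfigMap as a volume with `defaultMode` set on the volume source. A sketch of that stanza as a plain dict; the field names follow the core/v1 API, while the helper and its defaults are ours. One detail worth noting: `defaultMode` is an integer field, so an octal mode such as `0o400` is serialized as decimal `256` in JSON manifests.

```python
def configmap_volume(configmap_name, default_mode=0o644):
    """Sketch of a core/v1 ConfigMap volume source with defaultMode set,
    as exercised by the test above. Helper name and defaults are assumptions."""
    return {
        "name": "configmap-volume",
        "configMap": {
            "name": configmap_name,
            "defaultMode": default_mode,  # stored as a decimal int in JSON
        },
    }
```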
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:13:38.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 26 13:13:39.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005" in namespace "e2e-tests-projected-8c8jb" to be "success or failure"
Dec 26 13:13:39.143: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.00452ms
Dec 26 13:13:41.677: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.551017618s
Dec 26 13:13:43.713: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587091522s
Dec 26 13:13:47.962: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.83643926s
Dec 26 13:13:50.182: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.056433583s
Dec 26 13:13:52.204: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.077634688s
Dec 26 13:13:54.225: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.099510239s
STEP: Saw pod success
Dec 26 13:13:54.226: INFO: Pod "downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005" satisfied condition "success or failure"
Dec 26 13:13:54.242: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005 container client-container: 
STEP: delete the pod
Dec 26 13:13:54.500: INFO: Waiting for pod downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005 to disappear
Dec 26 13:13:54.517: INFO: Pod downwardapi-volume-88008527-27e1-11ea-948a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:13:54.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8c8jb" for this suite.
Dec 26 13:14:00.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:14:00.639: INFO: namespace: e2e-tests-projected-8c8jb, resource: bindings, ignored listing per whitelist
Dec 26 13:14:00.687: INFO: namespace e2e-tests-projected-8c8jb deletion completed in 6.156062015s

• [SLOW TEST:21.767 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:14:00.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 26 13:14:00.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:01.276: INFO: stderr: ""
Dec 26 13:14:01.276: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 13:14:01.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:01.480: INFO: stderr: ""
Dec 26 13:14:01.480: INFO: stdout: "update-demo-nautilus-4wqzm update-demo-nautilus-fhxrb "
Dec 26 13:14:01.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wqzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:01.644: INFO: stderr: ""
Dec 26 13:14:01.644: INFO: stdout: ""
Dec 26 13:14:01.644: INFO: update-demo-nautilus-4wqzm is created but not running
Dec 26 13:14:06.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:06.793: INFO: stderr: ""
Dec 26 13:14:06.793: INFO: stdout: "update-demo-nautilus-4wqzm update-demo-nautilus-fhxrb "
Dec 26 13:14:06.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wqzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:06.967: INFO: stderr: ""
Dec 26 13:14:06.967: INFO: stdout: ""
Dec 26 13:14:06.967: INFO: update-demo-nautilus-4wqzm is created but not running
Dec 26 13:14:11.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:13.222: INFO: stderr: ""
Dec 26 13:14:13.222: INFO: stdout: "update-demo-nautilus-4wqzm update-demo-nautilus-fhxrb "
Dec 26 13:14:13.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wqzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:14.084: INFO: stderr: ""
Dec 26 13:14:14.084: INFO: stdout: ""
Dec 26 13:14:14.084: INFO: update-demo-nautilus-4wqzm is created but not running
Dec 26 13:14:19.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:19.293: INFO: stderr: ""
Dec 26 13:14:19.293: INFO: stdout: "update-demo-nautilus-4wqzm update-demo-nautilus-fhxrb "
Dec 26 13:14:19.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wqzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:19.445: INFO: stderr: ""
Dec 26 13:14:19.445: INFO: stdout: "true"
Dec 26 13:14:19.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wqzm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:19.572: INFO: stderr: ""
Dec 26 13:14:19.572: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 13:14:19.572: INFO: validating pod update-demo-nautilus-4wqzm
Dec 26 13:14:19.617: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 13:14:19.617: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 13:14:19.617: INFO: update-demo-nautilus-4wqzm is verified up and running
Dec 26 13:14:19.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fhxrb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:19.778: INFO: stderr: ""
Dec 26 13:14:19.778: INFO: stdout: "true"
Dec 26 13:14:19.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fhxrb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:19.940: INFO: stderr: ""
Dec 26 13:14:19.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 13:14:19.940: INFO: validating pod update-demo-nautilus-fhxrb
Dec 26 13:14:19.952: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 13:14:19.952: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 13:14:19.952: INFO: update-demo-nautilus-fhxrb is verified up and running
STEP: using delete to clean up resources
Dec 26 13:14:19.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:20.130: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:14:20.130: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 26 13:14:20.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8tsrt'
Dec 26 13:14:20.303: INFO: stderr: "No resources found.\n"
Dec 26 13:14:20.303: INFO: stdout: ""
Dec 26 13:14:20.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8tsrt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 26 13:14:20.424: INFO: stderr: ""
Dec 26 13:14:20.424: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:14:20.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8tsrt" for this suite.
Dec 26 13:14:42.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:14:42.669: INFO: namespace: e2e-tests-kubectl-8tsrt, resource: bindings, ignored listing per whitelist
Dec 26 13:14:42.709: INFO: namespace e2e-tests-kubectl-8tsrt deletion completed in 22.266092344s

• [SLOW TEST:42.021 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
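The `validating pod ... got data: {"image": "nautilus.jpg"}` steps above fetch a JSON payload from each pod and compare its `image` field against the expected value. A Python rendering of that unmarshal-and-compare step; the payload shape comes from the `got data` lines, while the function name is ours:

```python
import json

EXPECTED_IMAGE = "nautilus.jpg"  # the value the 'expecting nautilus.jpg' lines check

def validate_update_demo(body):
    """Unmarshal a pod's data payload and verify its image field,
    mirroring the 'Unmarshalled json ... expecting nautilus.jpg' log lines."""
    data = json.loads(body)
    got = data.get("image")
    if got != EXPECTED_IMAGE:
        raise AssertionError(f"got {got!r}, expecting {EXPECTED_IMAGE!r}")
    return got
```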
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:14:42.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 26 13:14:42.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:43.412: INFO: stderr: ""
Dec 26 13:14:43.412: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 13:14:43.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:43.651: INFO: stderr: ""
Dec 26 13:14:43.651: INFO: stdout: "update-demo-nautilus-rwr9c "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 26 13:14:48.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:48.825: INFO: stderr: ""
Dec 26 13:14:48.825: INFO: stdout: "update-demo-nautilus-qjdh4 update-demo-nautilus-rwr9c "
Dec 26 13:14:48.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjdh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:49.001: INFO: stderr: ""
Dec 26 13:14:49.001: INFO: stdout: ""
Dec 26 13:14:49.002: INFO: update-demo-nautilus-qjdh4 is created but not running
Dec 26 13:14:54.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:56.672: INFO: stderr: ""
Dec 26 13:14:56.672: INFO: stdout: "update-demo-nautilus-qjdh4 update-demo-nautilus-rwr9c "
Dec 26 13:14:56.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjdh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:56.836: INFO: stderr: ""
Dec 26 13:14:56.836: INFO: stdout: "true"
Dec 26 13:14:56.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjdh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:57.128: INFO: stderr: ""
Dec 26 13:14:57.128: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 13:14:57.128: INFO: validating pod update-demo-nautilus-qjdh4
Dec 26 13:14:57.175: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 13:14:57.175: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 13:14:57.175: INFO: update-demo-nautilus-qjdh4 is verified up and running
Dec 26 13:14:57.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rwr9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:57.333: INFO: stderr: ""
Dec 26 13:14:57.333: INFO: stdout: "true"
Dec 26 13:14:57.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rwr9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:14:57.430: INFO: stderr: ""
Dec 26 13:14:57.430: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 13:14:57.430: INFO: validating pod update-demo-nautilus-rwr9c
Dec 26 13:14:57.447: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 13:14:57.447: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 13:14:57.447: INFO: update-demo-nautilus-rwr9c is verified up and running
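The repeated `--template` probes above all evaluate the same predicate: print `true` only when the pod has a container named `update-demo` whose state map contains a `running` entry, and print nothing otherwise (which the test reports as "created but not running"). A minimal Python sketch of that predicate, with hypothetical pod dicts mirroring the two outcomes seen in the log:

```python
# Sketch of the Go-template check used by the kubectl probes above:
# emit "true" iff a container named "update-demo" reports a "running" state.
def container_running(pod, name="update-demo"):
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return "true"
    return ""  # empty stdout -> pod counted as "created but not running"

# Hypothetical pod snippets (not real API responses from this run).
pending_pod = {"status": {}}  # containerStatuses not yet populated
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}

print(container_running(pending_pod))  # -> "" (not running yet)
print(container_running(running_pod))  # -> "true"
```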
STEP: rolling-update to new replication controller
Dec 26 13:14:57.449: INFO: scanned /root for discovery docs: 
Dec 26 13:14:57.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:15:34.108: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 26 13:15:34.109: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
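The rolling-update transcript above alternates scale-up and scale-down steps under a stated invariant: keep at least 2 pods available and never exceed 3 pods total. A hedged Python sketch of a greedy scheduler that reproduces the reported sequence under that invariant (an illustration of the constraint, not kubectl's actual implementation):

```python
def rolling_update_steps(old, desired, min_available, max_total):
    """Greedy schedule: surge the new RC when there is headroom below
    max_total, otherwise drain the old RC while staying >= min_available."""
    new, steps = 0, []
    while old > 0 or new < desired:
        if new < desired and old + new < max_total:
            new += 1
            steps.append(f"Scaling new up to {new}")
        elif old > 0 and old + new - 1 >= min_available:
            old -= 1
            steps.append(f"Scaling old down to {old}")
        else:
            break  # constraints cannot both be satisfied
    return steps

# Reproduces the order in the log: up to 1, down to 1, up to 2, down to 0.
print(rolling_update_steps(old=2, desired=2, min_available=2, max_total=3))
```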
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 13:15:34.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:15:34.272: INFO: stderr: ""
Dec 26 13:15:34.273: INFO: stdout: "update-demo-kitten-svwhd update-demo-kitten-svxzw "
Dec 26 13:15:34.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-svwhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:15:34.404: INFO: stderr: ""
Dec 26 13:15:34.404: INFO: stdout: "true"
Dec 26 13:15:34.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-svwhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:15:34.524: INFO: stderr: ""
Dec 26 13:15:34.524: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 26 13:15:34.524: INFO: validating pod update-demo-kitten-svwhd
Dec 26 13:15:34.589: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 26 13:15:34.589: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 26 13:15:34.590: INFO: update-demo-kitten-svwhd is verified up and running
Dec 26 13:15:34.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-svxzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:15:34.751: INFO: stderr: ""
Dec 26 13:15:34.751: INFO: stdout: "true"
Dec 26 13:15:34.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-svxzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqm2t'
Dec 26 13:15:34.870: INFO: stderr: ""
Dec 26 13:15:34.870: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 26 13:15:34.870: INFO: validating pod update-demo-kitten-svxzw
Dec 26 13:15:34.879: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 26 13:15:34.879: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 26 13:15:34.879: INFO: update-demo-kitten-svxzw is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:15:34.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rqm2t" for this suite.
Dec 26 13:16:00.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:16:01.431: INFO: namespace: e2e-tests-kubectl-rqm2t, resource: bindings, ignored listing per whitelist
Dec 26 13:16:01.445: INFO: namespace e2e-tests-kubectl-rqm2t deletion completed in 26.561800898s

• [SLOW TEST:78.736 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 26 13:16:01.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 26 13:16:16.528: INFO: Successfully updated pod "pod-update-activedeadlineseconds-dd01dc3b-27e1-11ea-948a-0242ac110005"
Dec 26 13:16:16.528: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-dd01dc3b-27e1-11ea-948a-0242ac110005" in namespace "e2e-tests-pods-8zqtf" to be "terminated due to deadline exceeded"
Dec 26 13:16:16.565: INFO: Pod "pod-update-activedeadlineseconds-dd01dc3b-27e1-11ea-948a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 36.762152ms
Dec 26 13:16:18.587: INFO: Pod "pod-update-activedeadlineseconds-dd01dc3b-27e1-11ea-948a-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058521952s
Dec 26 13:16:18.587: INFO: Pod "pod-update-activedeadlineseconds-dd01dc3b-27e1-11ea-948a-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 26 13:16:18.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8zqtf" for this suite.
Dec 26 13:16:25.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:16:25.866: INFO: namespace: e2e-tests-pods-8zqtf, resource: bindings, ignored listing per whitelist
Dec 26 13:16:25.880: INFO: namespace e2e-tests-pods-8zqtf deletion completed in 7.282255018s

• [SLOW TEST:24.434 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
Dec 26 13:16:25.880: INFO: Running AfterSuite actions on all nodes
Dec 26 13:16:25.880: INFO: Running AfterSuite actions on node 1
Dec 26 13:16:25.880: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8948.519 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS