I0223 10:47:13.147880 8 e2e.go:224] Starting e2e run "d90e5147-5629-11ea-8363-0242ac110008" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1582454832 - Will randomize all specs Will run 201 of 2164 specs Feb 23 10:47:13.582: INFO: >>> kubeConfig: /root/.kube/config Feb 23 10:47:13.588: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 23 10:47:13.616: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 23 10:47:13.664: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 23 10:47:13.664: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 23 10:47:13.664: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 23 10:47:13.677: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 23 10:47:13.677: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Feb 23 10:47:13.677: INFO: e2e test version: v1.13.12 Feb 23 10:47:13.679: INFO: kube-apiserver version: v1.13.8 SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:47:13.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected Feb 23 10:47:14.037: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 10:47:14.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-kkb9h" to be "success or failure" Feb 23 10:47:14.321: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 235.246292ms Feb 23 10:47:16.335: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249973645s Feb 23 10:47:18.397: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31136693s Feb 23 10:47:20.409: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323133475s Feb 23 10:47:22.565: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.479287011s Feb 23 10:47:24.601: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.515877204s STEP: Saw pod success Feb 23 10:47:24.602: INFO: Pod "downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:47:24.613: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 10:47:25.143: INFO: Waiting for pod downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008 to disappear Feb 23 10:47:25.331: INFO: Pod downwardapi-volume-da12c0f5-5629-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:47:25.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kkb9h" for this suite. Feb 23 10:47:31.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:47:31.592: INFO: namespace: e2e-tests-projected-kkb9h, resource: bindings, ignored listing per whitelist Feb 23 10:47:31.686: INFO: namespace e2e-tests-projected-kkb9h deletion completed in 6.33641234s • [SLOW TEST:18.007 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:47:31.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-rd2x STEP: Creating a pod to test atomic-volume-subpath Feb 23 10:47:32.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rd2x" in namespace "e2e-tests-subpath-cz6qw" to be "success or failure" Feb 23 10:47:32.056: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 21.947542ms Feb 23 10:47:34.444: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409619201s Feb 23 10:47:36.467: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432464731s Feb 23 10:47:38.624: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.589496777s Feb 23 10:47:41.574: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 9.540311516s Feb 23 10:47:43.598: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.564052534s Feb 23 10:47:45.611: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 13.577145782s Feb 23 10:47:47.623: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.588935027s Feb 23 10:47:49.676: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 17.641555199s Feb 23 10:47:51.870: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Pending", Reason="", readiness=false. Elapsed: 19.8360346s Feb 23 10:47:53.968: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 21.933697222s Feb 23 10:47:55.986: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 23.952196202s Feb 23 10:47:57.999: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 25.964470419s Feb 23 10:48:00.012: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 27.978343923s Feb 23 10:48:02.031: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 29.996943548s Feb 23 10:48:04.061: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 32.027261536s Feb 23 10:48:06.082: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 34.048103416s Feb 23 10:48:08.095: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 36.061032695s Feb 23 10:48:10.126: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Running", Reason="", readiness=false. Elapsed: 38.09189511s Feb 23 10:48:12.248: INFO: Pod "pod-subpath-test-downwardapi-rd2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.213440412s STEP: Saw pod success Feb 23 10:48:12.248: INFO: Pod "pod-subpath-test-downwardapi-rd2x" satisfied condition "success or failure" Feb 23 10:48:12.252: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-rd2x container test-container-subpath-downwardapi-rd2x: STEP: delete the pod Feb 23 10:48:12.634: INFO: Waiting for pod pod-subpath-test-downwardapi-rd2x to disappear Feb 23 10:48:12.642: INFO: Pod pod-subpath-test-downwardapi-rd2x no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rd2x Feb 23 10:48:12.642: INFO: Deleting pod "pod-subpath-test-downwardapi-rd2x" in namespace "e2e-tests-subpath-cz6qw" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:48:12.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-cz6qw" for this suite. 
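The two specs above both exercise the downward API volume plugin: the first projects the container's CPU limit into a file, the second mounts a downward API volume through a subPath. A minimal sketch of the kind of pod these tests create follows; the names, paths, and busybox image are illustrative rather than the suite's exact fixtures.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      # the subpath variant mounts a single item of the volume instead, e.g.:
      # subPath: cpu_limit
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The framework waits for the pod to reach Succeeded and then asserts on the container's log output, which is the "success or failure" condition tracked in the entries above.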
Feb 23 10:48:20.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:48:20.879: INFO: namespace: e2e-tests-subpath-cz6qw, resource: bindings, ignored listing per whitelist Feb 23 10:48:20.960: INFO: namespace e2e-tests-subpath-cz6qw deletion completed in 8.290691319s • [SLOW TEST:49.273 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:48:20.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 10:48:21.220: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:48:31.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dfxws" for this suite. 
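The remote-command spec creates an ordinary long-running pod and then drives the pod's exec subresource over a websocket connection, the same API path kubectl exec uses. A pod suitable for that kind of exec is sketched below; the image and command are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websockets
spec:
  containers:
  - name: main
    image: busybox:1.29
    # keep the container alive so there is something to exec into
    command: ["sh", "-c", "sleep 600"]
```

The exec itself is issued against /api/v1/namespaces/<namespace>/pods/<name>/exec on the API server, with the connection upgraded to a websocket (or SPDY) stream for stdin/stdout/stderr.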
Feb 23 10:49:13.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:49:13.784: INFO: namespace: e2e-tests-pods-dfxws, resource: bindings, ignored listing per whitelist Feb 23 10:49:13.904: INFO: namespace e2e-tests-pods-dfxws deletion completed in 42.245802774s • [SLOW TEST:52.945 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:49:13.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:49:24.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-zkqlr" for this suite. 
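The kubelet spec runs a one-shot busybox command and then confirms the output is retrievable through the pod's logs. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busybox container'"]
```

After the container exits, kubectl logs busybox-scheduling-example should return the echoed line, which is what the assertion checks.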
Feb 23 10:50:06.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:50:06.460: INFO: namespace: e2e-tests-kubelet-test-zkqlr, resource: bindings, ignored listing per whitelist Feb 23 10:50:06.798: INFO: namespace e2e-tests-kubelet-test-zkqlr deletion completed in 42.438751123s • [SLOW TEST:52.893 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:50:06.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 23 10:50:06.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4wh5c' Feb 23 10:50:08.909: INFO: stderr: "" Feb 23 10:50:08.909: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 23 10:50:08.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4wh5c' Feb 23 10:50:22.652: INFO: stderr: "" Feb 23 10:50:22.652: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:50:22.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4wh5c" for this suite. 
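The recorded command (kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine) creates a bare Pod rather than a Deployment. The object it produces is roughly the following manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```

The test only verifies that the Pod exists before deleting it again.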
Feb 23 10:50:28.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:50:28.946: INFO: namespace: e2e-tests-kubectl-4wh5c, resource: bindings, ignored listing per whitelist Feb 23 10:50:28.963: INFO: namespace e2e-tests-kubectl-4wh5c deletion completed in 6.280988012s • [SLOW TEST:22.164 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:50:28.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 10:50:29.299: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4e61742b-562a-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001953032), BlockOwnerDeletion:(*bool)(0xc001953033)}} Feb 23 10:50:29.436: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4e598efa-562a-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000facd12), BlockOwnerDeletion:(*bool)(0xc000facd13)}} Feb 23 10:50:29.463: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4e5e586c-562a-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000a9ac7a), BlockOwnerDeletion:(*bool)(0xc000a9ac7b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:50:39.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-84k4l" for this suite. 
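The garbage-collector spec builds a circular ownership chain (pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2, as the OwnerReferences dumps above show) and verifies the collector still makes progress. Ownership lives in object metadata; one link of the circle is sketched here, reusing the pod3 UID recorded in the log. The controller/blockOwnerDeletion values are illustrative, since the log prints only pointer addresses.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 4e61742b-562a-11ea-a994-fa163e34d433   # must be the real UID of the owning pod
    controller: true                            # illustrative value
    blockOwnerDeletion: true                    # illustrative value
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
```

The point of the spec is that an ownership loop like this does not wedge the garbage collector.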
Feb 23 10:50:45.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:50:45.923: INFO: namespace: e2e-tests-gc-84k4l, resource: bindings, ignored listing per whitelist Feb 23 10:50:46.052: INFO: namespace e2e-tests-gc-84k4l deletion completed in 6.440114446s • [SLOW TEST:17.089 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:50:46.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 23 10:50:46.261: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 23 10:50:46.275: INFO: Waiting for terminating namespaces to be deleted... Feb 23 10:50:46.280: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 23 10:50:46.296: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 23 10:50:46.296: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 23 10:50:46.296: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 23 10:50:46.296: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 23 10:50:46.296: INFO: Container coredns ready: true, restart count 0 Feb 23 10:50:46.296: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 23 10:50:46.296: INFO: Container kube-proxy ready: true, restart count 0 Feb 23 10:50:46.296: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 23 10:50:46.296: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 23 10:50:46.296: INFO: Container weave ready: true, restart count 0 Feb 23 10:50:46.296: INFO: Container weave-npc ready: true, restart count 0 Feb 23 10:50:46.296: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 23 10:50:46.296: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
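The pod submitted at this step carries a node selector that no node in the cluster satisfies; its shape is roughly the following, with the selector key and value being illustrative (the log does not record the exact label the suite generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e-unsatisfiable: "true"   # no node carries this label
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
```

With a single schedulable node and no matching label, the scheduler emits the FailedScheduling warning recorded in the next event.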
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f6027b0295f77d], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:50:47.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-k99lr" for this suite. Feb 23 10:50:55.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:50:55.902: INFO: namespace: e2e-tests-sched-pred-k99lr, resource: bindings, ignored listing per whitelist Feb 23 10:50:55.912: INFO: namespace e2e-tests-sched-pred-k99lr deletion completed in 8.477218405s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:9.861 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:50:55.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 10:50:56.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-lflhr" to be "success or failure" Feb 23 10:50:56.125: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 70.94722ms Feb 23 10:50:58.229: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174998192s Feb 23 10:51:00.245: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191164882s Feb 23 10:51:02.274: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219960129s Feb 23 10:51:04.288: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234610184s Feb 23 10:51:06.306: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.252382215s STEP: Saw pod success Feb 23 10:51:06.306: INFO: Pod "downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:51:06.312: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 10:51:06.376: INFO: Waiting for pod downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008 to disappear Feb 23 10:51:06.484: INFO: Pod downwardapi-volume-5e63f93a-562a-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:51:06.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lflhr" for this suite. Feb 23 10:51:12.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:51:12.822: INFO: namespace: e2e-tests-projected-lflhr, resource: bindings, ignored listing per whitelist Feb 23 10:51:12.967: INFO: namespace e2e-tests-projected-lflhr deletion completed in 6.469642262s • [SLOW TEST:17.053 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:51:12.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 23 10:51:13.138: INFO: Waiting up to 5m0s for pod "downward-api-689232ad-562a-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-xp98r" to be "success or failure" Feb 23 10:51:13.208: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 70.419744ms Feb 23 10:51:15.407: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269014098s Feb 23 10:51:17.459: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321544396s Feb 23 10:51:20.329: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.190724699s Feb 23 10:51:22.348: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.210352714s Feb 23 10:51:24.368: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.229634641s STEP: Saw pod success Feb 23 10:51:24.368: INFO: Pod "downward-api-689232ad-562a-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:51:24.376: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-689232ad-562a-11ea-8363-0242ac110008 container dapi-container: STEP: delete the pod Feb 23 10:51:24.568: INFO: Waiting for pod downward-api-689232ad-562a-11ea-8363-0242ac110008 to disappear Feb 23 10:51:24.578: INFO: Pod downward-api-689232ad-562a-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:51:24.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xp98r" for this suite. Feb 23 10:51:30.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:51:30.812: INFO: namespace: e2e-tests-downward-api-xp98r, resource: bindings, ignored listing per whitelist Feb 23 10:51:30.900: INFO: namespace e2e-tests-downward-api-xp98r deletion completed in 6.308494464s • [SLOW TEST:17.933 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:51:30.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 23 10:51:31.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9ffdp' Feb 23 10:51:31.670: INFO: stderr: "" Feb 23 10:51:31.671: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 23 10:51:46.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9ffdp -o json' Feb 23 10:51:46.884: INFO: stderr: "" Feb 23 10:51:46.884: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-23T10:51:31Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": 
\"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-9ffdp\",\n \"resourceVersion\": \"22632557\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-9ffdp/pods/e2e-test-nginx-pod\",\n \"uid\": \"739a43c9-562a-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-trrpx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-trrpx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-trrpx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-23T10:51:31Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-23T10:51:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-23T10:51:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-23T10:51:31Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://939923bfc2b090fcdde5f11cd7ddf5dddc913a72139dfc5a9515552371288a53\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-23T10:51:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-23T10:51:31Z\"\n }\n}\n" STEP: replace the image in the pod Feb 23 10:51:46.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-9ffdp' Feb 23 10:51:47.427: INFO: stderr: "" Feb 23 10:51:47.427: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 23 10:51:47.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9ffdp' Feb 23 10:51:55.738: INFO: stderr: "" Feb 23 10:51:55.738: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:51:55.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9ffdp" for this suite. Feb 23 10:52:02.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:52:02.348: INFO: namespace: e2e-tests-kubectl-9ffdp, resource: bindings, ignored listing per whitelist Feb 23 10:52:02.602: INFO: namespace e2e-tests-kubectl-9ffdp deletion completed in 6.588485329s • [SLOW TEST:31.701 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:52:02.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 10:52:02.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-fql5c" to be "success or failure" Feb 23 10:52:02.915: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.53216ms Feb 23 10:52:04.936: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026233506s Feb 23 10:52:06.964: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053851052s Feb 23 10:52:09.159: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248860665s Feb 23 10:52:11.171: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261453757s Feb 23 10:52:13.188: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.278029621s STEP: Saw pod success Feb 23 10:52:13.188: INFO: Pod "downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:52:13.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 10:52:13.984: INFO: Waiting for pod downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008 to disappear Feb 23 10:52:14.277: INFO: Pod downwardapi-volume-863bee1d-562a-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:52:14.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fql5c" for this suite. Feb 23 10:52:20.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:52:20.611: INFO: namespace: e2e-tests-projected-fql5c, resource: bindings, ignored listing per whitelist Feb 23 10:52:20.813: INFO: namespace e2e-tests-projected-fql5c deletion completed in 6.508505788s • [SLOW TEST:18.211 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:52:20.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 23 10:52:21.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632652,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 23 10:52:21.050: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632652,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 23 10:52:31.079: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632664,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 23 10:52:31.079: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632664,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 23 10:52:41.093: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632677,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 23 10:52:41.093: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632677,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 23 10:52:51.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632689,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 23 10:52:51.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-a,UID:910da157-562a-11ea-a994-fa163e34d433,ResourceVersion:22632689,Generation:0,CreationTimestamp:2020-02-23 10:52:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 23 10:53:01.143: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-b,UID:a8f183bf-562a-11ea-a994-fa163e34d433,ResourceVersion:22632702,Generation:0,CreationTimestamp:2020-02-23 10:53:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 23 10:53:01.143: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-b,UID:a8f183bf-562a-11ea-a994-fa163e34d433,ResourceVersion:22632702,Generation:0,CreationTimestamp:2020-02-23 10:53:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 23 10:53:11.164: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-b,UID:a8f183bf-562a-11ea-a994-fa163e34d433,ResourceVersion:22632715,Generation:0,CreationTimestamp:2020-02-23 10:53:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 23 10:53:11.164: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v8kbn,SelfLink:/api/v1/namespaces/e2e-tests-watch-v8kbn/configmaps/e2e-watch-test-configmap-b,UID:a8f183bf-562a-11ea-a994-fa163e34d433,ResourceVersion:22632715,Generation:0,CreationTimestamp:2020-02-23 10:53:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:53:21.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-v8kbn" for this suite. Feb 23 10:53:27.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:53:27.381: INFO: namespace: e2e-tests-watch-v8kbn, resource: bindings, ignored listing per whitelist Feb 23 10:53:27.475: INFO: namespace e2e-tests-watch-v8kbn deletion completed in 6.292639956s • [SLOW TEST:66.662 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:53:27.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:54:27.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-m5ttl" for this suite. 
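The probing spec runs a container whose readiness probe can never succeed and then watches it for about a minute, asserting the pod never becomes Ready and its restart count stays at zero (failed readiness probes, unlike liveness probes, never restart a container). A sketch with an illustrative image and probe command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always exits non-zero, so Ready never flips to true
      initialDelaySeconds: 5
      periodSeconds: 5
```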
Feb 23 10:54:51.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:54:51.865: INFO: namespace: e2e-tests-container-probe-m5ttl, resource: bindings, ignored listing per whitelist Feb 23 10:54:51.983: INFO: namespace e2e-tests-container-probe-m5ttl deletion completed in 24.307905991s • [SLOW TEST:84.508 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:54:51.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:55:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9w6bp" for this suite. 
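The wrapper-volume spec mounts a Secret volume and a ConfigMap volume side by side in one pod (both volume types are implemented on top of an emptyDir wrapper) and checks that the two mounts do not conflict, matching the secret/configmap/pod cleanup steps above. Roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-and-configmap-pod
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-volume-secret      # illustrative name
  - name: configmap-volume
    configMap:
      name: wrapped-volume-configmap         # illustrative name
```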
Feb 23 10:55:08.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:55:08.734: INFO: namespace: e2e-tests-emptydir-wrapper-9w6bp, resource: bindings, ignored listing per whitelist Feb 23 10:55:08.800: INFO: namespace e2e-tests-emptydir-wrapper-9w6bp deletion completed in 6.176407184s • [SLOW TEST:16.817 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:55:08.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 23 10:55:19.603: INFO: Successfully updated pod "pod-update-f52b78ce-562a-11ea-8363-0242ac110008" STEP: verifying the updated pod is in kubernetes Feb 23 10:55:19.639: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:55:19.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hq9xj" for this suite. 
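The pod-update spec mutates the running pod in place (what is mutable on a live pod is essentially metadata such as labels and annotations, plus a handful of spec fields like the container image) and verifies the change is visible on read-back. A merge-patch body for such an update might look like the following, with an illustrative label key; it could be applied with kubectl patch --type merge or an equivalent client-side update.

```yaml
metadata:
  labels:
    updated: "true"   # illustrative; label changes are always allowed on a live pod
```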
Feb 23 10:55:33.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:55:33.819: INFO: namespace: e2e-tests-pods-hq9xj, resource: bindings, ignored listing per whitelist Feb 23 10:55:33.905: INFO: namespace e2e-tests-pods-hq9xj deletion completed in 14.257732568s • [SLOW TEST:25.104 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:55:33.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-042fb80a-562b-11ea-8363-0242ac110008 STEP: Creating a pod to test consume secrets Feb 23 10:55:34.232: INFO: Waiting up to 5m0s for pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-kc57g" to be "success or failure" Feb 23 10:55:34.565: INFO: Pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 332.824291ms Feb 23 10:55:36.605: INFO: Pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372542444s Feb 23 10:55:38.977: INFO: Pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.744376333s Feb 23 10:55:40.996: INFO: Pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.763786893s Feb 23 10:55:44.164: INFO: Pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.932061687s STEP: Saw pod success Feb 23 10:55:44.165: INFO: Pod "pod-secrets-0430c29d-562b-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:55:44.203: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0430c29d-562b-11ea-8363-0242ac110008 container secret-env-test: STEP: delete the pod Feb 23 10:55:44.530: INFO: Waiting for pod pod-secrets-0430c29d-562b-11ea-8363-0242ac110008 to disappear Feb 23 10:55:44.545: INFO: Pod pod-secrets-0430c29d-562b-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:55:44.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kc57g" for this suite. 
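To replay the Secrets-as-environment-variables case above outside the framework, the essential pieces are a Secret and a pod whose container pulls one key of it through env[].valueFrom.secretKeyRef. A rough equivalent, with invented names and busybox standing in for the test image:

kubectl create secret generic secret-env-demo --from-literal=SECRET_DATA=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    # print the injected variable and exit, mirroring the "success or failure" pattern in the log
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: SECRET_DATA
EOF
kubectl logs pod-secret-env-demo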
Feb 23 10:55:50.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:55:50.716: INFO: namespace: e2e-tests-secrets-kc57g, resource: bindings, ignored listing per whitelist Feb 23 10:55:51.039: INFO: namespace e2e-tests-secrets-kc57g deletion completed in 6.402215353s • [SLOW TEST:17.133 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:55:51.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jn5zf STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 23 10:55:51.402: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 23 10:56:25.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-jn5zf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 10:56:25.839: INFO: >>> kubeConfig: /root/.kube/config I0223 10:56:25.962974 8 log.go:172] (0xc000c56210) (0xc001ab05a0) Create stream I0223 10:56:25.963140 8 log.go:172] (0xc000c56210) (0xc001ab05a0) Stream added, broadcasting: 1 I0223 10:56:25.970874 8 log.go:172] (0xc000c56210) Reply frame received for 1 I0223 10:56:25.970931 8 log.go:172] (0xc000c56210) (0xc0018066e0) Create stream I0223 10:56:25.970939 8 log.go:172] (0xc000c56210) (0xc0018066e0) Stream added, broadcasting: 3 I0223 10:56:25.972488 8 log.go:172] (0xc000c56210) Reply frame received for 3 I0223 10:56:25.972527 8 log.go:172] (0xc000c56210) (0xc00189c0a0) Create stream I0223 10:56:25.972539 8 log.go:172] (0xc000c56210) (0xc00189c0a0) Stream added, broadcasting: 5 I0223 10:56:25.973549 8 log.go:172] (0xc000c56210) Reply frame received for 5 I0223 10:56:26.234164 8 log.go:172] (0xc000c56210) Data frame received for 3 I0223 10:56:26.234202 8 log.go:172] (0xc0018066e0) (3) Data frame handling I0223 10:56:26.234233 8 log.go:172] (0xc0018066e0) (3) Data frame sent I0223 10:56:26.406002 8 log.go:172] (0xc000c56210) (0xc0018066e0) Stream removed, broadcasting: 3 I0223 10:56:26.406177 8 log.go:172] (0xc000c56210) Data frame received for 1 I0223 10:56:26.406236 8 log.go:172] (0xc000c56210) (0xc00189c0a0) Stream removed, broadcasting: 5 I0223 10:56:26.406298 8 log.go:172] (0xc001ab05a0) (1) Data frame handling I0223 10:56:26.406345 8 
log.go:172] (0xc001ab05a0) (1) Data frame sent I0223 10:56:26.406361 8 log.go:172] (0xc000c56210) (0xc001ab05a0) Stream removed, broadcasting: 1 I0223 10:56:26.406386 8 log.go:172] (0xc000c56210) Go away received I0223 10:56:26.407114 8 log.go:172] (0xc000c56210) (0xc001ab05a0) Stream removed, broadcasting: 1 I0223 10:56:26.407139 8 log.go:172] (0xc000c56210) (0xc0018066e0) Stream removed, broadcasting: 3 I0223 10:56:26.407157 8 log.go:172] (0xc000c56210) (0xc00189c0a0) Stream removed, broadcasting: 5 Feb 23 10:56:26.407: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:56:26.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-jn5zf" for this suite. Feb 23 10:56:50.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:56:50.563: INFO: namespace: e2e-tests-pod-network-test-jn5zf, resource: bindings, ignored listing per whitelist Feb 23 10:56:50.666: INFO: namespace e2e-tests-pod-network-test-jn5zf deletion completed in 24.238597543s • [SLOW TEST:59.625 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:56:50.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 23 10:56:50.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-zlrtz' Feb 23 10:56:51.004: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 23 10:56:51.004: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 23 10:56:51.014: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 23 10:56:51.131: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 23 10:56:51.171: INFO: scanned /root for discovery docs: Feb 23 10:56:51.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-zlrtz' Feb 23 10:57:18.240: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 23 10:57:18.240: INFO: stdout: "Created e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc\nScaling up e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Feb 23 10:57:18.240: INFO: stdout: "Created e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc\nScaling up e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 23 10:57:18.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zlrtz' Feb 23 10:57:18.410: INFO: stderr: "" Feb 23 10:57:18.410: INFO: stdout: "e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc-ljhxw " Feb 23 10:57:18.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc-ljhxw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrtz' Feb 23 10:57:18.603: INFO: stderr: "" Feb 23 10:57:18.603: INFO: stdout: "true" Feb 23 10:57:18.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc-ljhxw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrtz' Feb 23 10:57:18.750: INFO: stderr: "" Feb 23 10:57:18.750: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 23 10:57:18.750: INFO: e2e-test-nginx-rc-5e43d383bdc304f456a224da6a0b5ebc-ljhxw is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 23 10:57:18.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zlrtz' Feb 23 10:57:18.909: INFO: stderr: "" Feb 23 10:57:18.909: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:57:18.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zlrtz" for this suite. Feb 23 10:57:42.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:57:43.206: INFO: namespace: e2e-tests-kubectl-zlrtz, resource: bindings, ignored listing per whitelist Feb 23 10:57:43.216: INFO: namespace e2e-tests-kubectl-zlrtz deletion completed in 24.297878537s • [SLOW TEST:52.550 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:57:43.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0223 10:58:14.235835 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 23 10:58:14.235: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:58:14.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-lc5sp" for this suite. Feb 23 10:58:24.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:58:24.956: INFO: namespace: e2e-tests-gc-lc5sp, resource: bindings, ignored listing per whitelist Feb 23 10:58:24.974: INFO: namespace e2e-tests-gc-lc5sp deletion completed in 10.73209607s • [SLOW TEST:41.758 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:58:24.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zrb5l STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zrb5l to expose endpoints map[] Feb 23 10:58:26.105: INFO: Get endpoints failed (14.825657ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 23 10:58:27.125: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zrb5l exposes endpoints map[] (1.03464957s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-zrb5l 
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zrb5l to expose endpoints map[pod1:[100]] Feb 23 10:58:31.977: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.829946881s elapsed, will retry) Feb 23 10:58:36.093: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zrb5l exposes endpoints map[pod1:[100]] (8.945150884s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-zrb5l STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zrb5l to expose endpoints map[pod1:[100] pod2:[101]] Feb 23 10:58:40.431: INFO: Unexpected endpoints: found map[6b43b107-562b-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.322274408s elapsed, will retry) Feb 23 10:58:44.798: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zrb5l exposes endpoints map[pod1:[100] pod2:[101]] (8.689477158s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-zrb5l STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zrb5l to expose endpoints map[pod2:[101]] Feb 23 10:58:44.916: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zrb5l exposes endpoints map[pod2:[101]] (66.616032ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-zrb5l STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zrb5l to expose endpoints map[] Feb 23 10:58:45.063: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zrb5l exposes endpoints map[] (90.087818ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:58:45.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-zrb5l" for this suite. 
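The multiport Services case above drives the endpoints map by creating and deleting pods behind a two-port Service. A pared-down version is a Service exposing two named ports plus one matching pod, with kubectl get endpoints showing the map fill in once the pod is Ready; all names, ports and the image are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-demo
spec:
  selector:
    app: multi-endpoint-demo
  ports:
  - name: portname1      # every port of a multi-port Service must be named
    port: 80
    targetPort: 80
  - name: portname2
    port: 81
    targetPort: 80
EOF
kubectl get endpoints multi-endpoint-demo         # empty until a matching pod is Ready
kubectl run multi-endpoint-pod1 --image=nginx:1.14-alpine --labels=app=multi-endpoint-demo --port=80
kubectl get endpoints multi-endpoint-demo -o wide # now lists the pod IP under both named ports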
Feb 23 10:59:09.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:59:09.467: INFO: namespace: e2e-tests-services-zrb5l, resource: bindings, ignored listing per whitelist Feb 23 10:59:09.477: INFO: namespace e2e-tests-services-zrb5l deletion completed in 24.270024378s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:44.502 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:59:09.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 10:59:09.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-s4dbn" to be "success or failure" Feb 23 10:59:09.746: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.430938ms Feb 23 10:59:11.761: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026742333s Feb 23 10:59:13.794: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059684156s Feb 23 10:59:16.179: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443865477s Feb 23 10:59:18.277: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541988384s Feb 23 10:59:20.951: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.21656925s STEP: Saw pod success Feb 23 10:59:20.951: INFO: Pod "downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:59:20.964: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 10:59:21.773: INFO: Waiting for pod downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008 to disappear Feb 23 10:59:21.801: INFO: Pod downwardapi-volume-84a47548-562b-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:59:21.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-s4dbn" for this suite. Feb 23 10:59:27.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:59:27.947: INFO: namespace: e2e-tests-projected-s4dbn, resource: bindings, ignored listing per whitelist Feb 23 10:59:28.131: INFO: namespace e2e-tests-projected-s4dbn deletion completed in 6.317345679s • [SLOW TEST:18.654 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:59:28.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 23 10:59:28.390: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t8sg4,SelfLink:/api/v1/namespaces/e2e-tests-watch-t8sg4/configmaps/e2e-watch-test-watch-closed,UID:8fc33d4b-562b-11ea-a994-fa163e34d433,ResourceVersion:22633544,Generation:0,CreationTimestamp:2020-02-23 10:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 23 10:59:28.390: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t8sg4,SelfLink:/api/v1/namespaces/e2e-tests-watch-t8sg4/configmaps/e2e-watch-test-watch-closed,UID:8fc33d4b-562b-11ea-a994-fa163e34d433,ResourceVersion:22633545,Generation:0,CreationTimestamp:2020-02-23 10:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 23 10:59:28.565: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t8sg4,SelfLink:/api/v1/namespaces/e2e-tests-watch-t8sg4/configmaps/e2e-watch-test-watch-closed,UID:8fc33d4b-562b-11ea-a994-fa163e34d433,ResourceVersion:22633546,Generation:0,CreationTimestamp:2020-02-23 10:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 23 10:59:28.565: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t8sg4,SelfLink:/api/v1/namespaces/e2e-tests-watch-t8sg4/configmaps/e2e-watch-test-watch-closed,UID:8fc33d4b-562b-11ea-a994-fa163e34d433,ResourceVersion:22633547,Generation:0,CreationTimestamp:2020-02-23 10:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:59:28.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-t8sg4" for this suite. 
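What the Watchers case above exercises is resuming a watch from a previously observed resourceVersion so that no intermediate ConfigMap events are missed. The raw API makes this visible; the sketch below goes through kubectl proxy and curl rather than the Go client the suite uses, and the configmap name is invented.

kubectl create configmap watch-demo --from-literal=mutation=0
RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap watch-demo --type=merge -p '{"data":{"mutation":"1"}}'   # a change made while nothing is watching
kubectl proxy --port=8001 &
# resume from RV: the MODIFIED event above is replayed first, followed by anything newer (e.g. a later DELETED)
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name=watch-demo"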
Feb 23 10:59:34.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:59:34.673: INFO: namespace: e2e-tests-watch-t8sg4, resource: bindings, ignored listing per whitelist Feb 23 10:59:34.723: INFO: namespace e2e-tests-watch-t8sg4 deletion completed in 6.147020588s • [SLOW TEST:6.592 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:59:34.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 23 10:59:34.910: INFO: Waiting up to 5m0s for pod "pod-93a056e9-562b-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-bgdkd" to be "success or failure" Feb 23 10:59:34.924: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.485541ms Feb 23 10:59:36.957: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046370634s Feb 23 10:59:38.971: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060440932s Feb 23 10:59:40.986: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075596877s Feb 23 10:59:43.399: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488510582s Feb 23 10:59:45.415: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504637935s STEP: Saw pod success Feb 23 10:59:45.415: INFO: Pod "pod-93a056e9-562b-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 10:59:45.419: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-93a056e9-562b-11ea-8363-0242ac110008 container test-container: STEP: delete the pod Feb 23 10:59:45.639: INFO: Waiting for pod pod-93a056e9-562b-11ea-8363-0242ac110008 to disappear Feb 23 10:59:45.677: INFO: Pod pod-93a056e9-562b-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:59:45.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bgdkd" for this suite. 
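The emptyDir permission matrix being walked here ((root,0666,default) in this spec, (non-root,0644,default) further down) comes down to creating a file on an emptyDir mount with a given mode and reading it back. A hand-rolled variant of the root/0666/default-medium combination, with made-up names and busybox standing in for the mounttest image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file with mode 0666 on the emptyDir and print its permissions
    command: ["sh", "-c", "echo mount-tester > /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}     # default medium (node disk); medium: Memory would use tmpfs instead
EOF
kubectl logs emptydir-mode-demo   # expect -rw-rw-rw-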
Feb 23 10:59:51.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:59:52.040: INFO: namespace: e2e-tests-emptydir-bgdkd, resource: bindings, ignored listing per whitelist Feb 23 10:59:52.040: INFO: namespace e2e-tests-emptydir-bgdkd deletion completed in 6.35177331s • [SLOW TEST:17.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:59:52.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 10:59:52.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-6qxtw" for this suite. 
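The "secure master service" spec above only asserts that the built-in kubernetes Service in the default namespace exposes the API server over HTTPS. That is quick to confirm by hand; the jsonpath filter assumes the port is named https, which is the case on stock clusters.

kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # typically 443
kubectl get endpoints kubernetes -n default                                                      # the apiserver address(es) behind it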
Feb 23 10:59:58.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 10:59:58.445: INFO: namespace: e2e-tests-services-6qxtw, resource: bindings, ignored listing per whitelist Feb 23 10:59:58.457: INFO: namespace e2e-tests-services-6qxtw deletion completed in 6.172232678s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.415 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 10:59:58.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 10:59:58.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-6sbhb" to be "success or failure" Feb 23 10:59:58.800: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.861043ms Feb 23 11:00:01.241: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478265944s Feb 23 11:00:03.255: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492260708s Feb 23 11:00:05.267: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50378683s Feb 23 11:00:07.660: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.897004704s Feb 23 11:00:09.675: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.912070261s STEP: Saw pod success Feb 23 11:00:09.675: INFO: Pod "downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:00:09.679: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 11:00:09.968: INFO: Waiting for pod downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008 to disappear Feb 23 11:00:10.111: INFO: Pod downwardapi-volume-a1dd4f65-562b-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:00:10.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6sbhb" for this suite. Feb 23 11:00:16.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:00:16.196: INFO: namespace: e2e-tests-downward-api-6sbhb, resource: bindings, ignored listing per whitelist Feb 23 11:00:16.283: INFO: namespace e2e-tests-downward-api-6sbhb deletion completed in 6.159795252s • [SLOW TEST:17.826 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:00:16.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 23 11:00:25.399: INFO: Successfully updated pod "annotationupdateac867525-562b-11ea-8363-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:00:27.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-w5fv9" for this suite. 
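The annotation-update spec above relies on the downward-API behaviour that a projected volume file tracking metadata.annotations is refreshed in place when the pod's annotations change (environment variables, by contrast, are fixed at container start). A small stand-in manifest, with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo build=two --overwrite
kubectl logs annotationupdate-demo --tail=5    # the projected file picks up build="two" after the kubelet's next sync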
Feb 23 11:00:53.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:00:53.785: INFO: namespace: e2e-tests-projected-w5fv9, resource: bindings, ignored listing per whitelist Feb 23 11:00:53.810: INFO: namespace e2e-tests-projected-w5fv9 deletion completed in 26.316765409s • [SLOW TEST:37.526 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:00:53.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-c2d95ee7-562b-11ea-8363-0242ac110008 STEP: Creating a pod to test consume secrets Feb 23 11:00:54.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-x578v" to be "success or failure" Feb 23 11:00:54.227: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 105.11403ms Feb 23 11:00:56.502: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379975028s Feb 23 11:00:58.529: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406267009s Feb 23 11:01:00.568: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445902896s Feb 23 11:01:02.929: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807089424s Feb 23 11:01:04.958: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.835707512s STEP: Saw pod success Feb 23 11:01:04.958: INFO: Pod "pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:01:04.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 23 11:01:05.316: INFO: Waiting for pod pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008 to disappear Feb 23 11:01:05.326: INFO: Pod pod-projected-secrets-c2dac572-562b-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:01:05.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x578v" for this suite. Feb 23 11:01:11.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:01:11.756: INFO: namespace: e2e-tests-projected-x578v, resource: bindings, ignored listing per whitelist Feb 23 11:01:11.764: INFO: namespace e2e-tests-projected-x578v deletion completed in 6.430712772s • [SLOW TEST:17.953 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:01:11.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-jv76 STEP: Creating a pod to test atomic-volume-subpath Feb 23 11:01:12.009: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jv76" in namespace "e2e-tests-subpath-d96xn" to be "success or failure" Feb 23 11:01:12.017: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 7.903986ms Feb 23 11:01:14.046: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036665511s Feb 23 11:01:16.068: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058345724s Feb 23 11:01:18.162: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152775631s Feb 23 11:01:20.209: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200053679s Feb 23 11:01:22.232: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.223152418s Feb 23 11:01:24.246: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 12.236882563s Feb 23 11:01:26.258: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 14.249040351s Feb 23 11:01:28.271: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Pending", Reason="", readiness=false. Elapsed: 16.261447663s Feb 23 11:01:30.290: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 18.280843026s Feb 23 11:01:32.318: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 20.308325676s Feb 23 11:01:34.336: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 22.326492971s Feb 23 11:01:36.381: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 24.371404747s Feb 23 11:01:38.396: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 26.386944384s Feb 23 11:01:40.412: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 28.402628243s Feb 23 11:01:42.432: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 30.423115701s Feb 23 11:01:44.449: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 32.439277685s Feb 23 11:01:46.512: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Running", Reason="", readiness=false. Elapsed: 34.502221766s Feb 23 11:01:48.759: INFO: Pod "pod-subpath-test-secret-jv76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.74978079s STEP: Saw pod success Feb 23 11:01:48.759: INFO: Pod "pod-subpath-test-secret-jv76" satisfied condition "success or failure" Feb 23 11:01:48.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-jv76 container test-container-subpath-secret-jv76: STEP: delete the pod Feb 23 11:01:49.084: INFO: Waiting for pod pod-subpath-test-secret-jv76 to disappear Feb 23 11:01:49.098: INFO: Pod pod-subpath-test-secret-jv76 no longer exists STEP: Deleting pod pod-subpath-test-secret-jv76 Feb 23 11:01:49.098: INFO: Deleting pod "pod-subpath-test-secret-jv76" in namespace "e2e-tests-subpath-d96xn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:01:49.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-d96xn" for this suite. 
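For the subpath-with-secret spec that just completed, the interesting field is volumeMounts[].subPath: it mounts a single key of the Secret as one file instead of the whole volume directory. A minimal reproduction (names are placeholders); note that, unlike a whole-volume Secret mount, a subPath mount is not refreshed when the Secret later changes.

kubectl create secret generic subpath-demo-secret --from-literal=secret-key=secret-value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /mnt/secret-file"]
    volumeMounts:
    - name: secret-volume
      mountPath: /mnt/secret-file
      subPath: secret-key      # mount just this key of the volume as a single file
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: subpath-demo-secret
EOF
kubectl logs pod-subpath-secret-demo   # prints secret-value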
Feb 23 11:01:57.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:01:57.445: INFO: namespace: e2e-tests-subpath-d96xn, resource: bindings, ignored listing per whitelist Feb 23 11:01:57.489: INFO: namespace e2e-tests-subpath-d96xn deletion completed in 8.370690589s • [SLOW TEST:45.725 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:01:57.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 23 11:01:57.806: INFO: Waiting up to 5m0s for pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-6f5r9" to be "success or failure" Feb 23 11:01:57.812: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.7261ms Feb 23 11:01:59.840: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033567362s Feb 23 11:02:02.112: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305773677s Feb 23 11:02:04.510: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703921265s Feb 23 11:02:06.560: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753434034s Feb 23 11:02:09.604: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.797102545s Feb 23 11:02:11.619: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.812693516s STEP: Saw pod success Feb 23 11:02:11.619: INFO: Pod "pod-e8ce5dbe-562b-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:02:11.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e8ce5dbe-562b-11ea-8363-0242ac110008 container test-container: STEP: delete the pod Feb 23 11:02:11.977: INFO: Waiting for pod pod-e8ce5dbe-562b-11ea-8363-0242ac110008 to disappear Feb 23 11:02:11.991: INFO: Pod pod-e8ce5dbe-562b-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:02:11.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6f5r9" for this suite. Feb 23 11:02:18.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:02:18.287: INFO: namespace: e2e-tests-emptydir-6f5r9, resource: bindings, ignored listing per whitelist Feb 23 11:02:18.302: INFO: namespace e2e-tests-emptydir-6f5r9 deletion completed in 6.303002398s • [SLOW TEST:20.812 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:02:18.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 23 11:02:18.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-ms6g8' Feb 23 11:02:20.549: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 23 11:02:20.549: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 23 11:02:24.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ms6g8' Feb 23 11:02:25.127: INFO: stderr: "" Feb 23 11:02:25.127: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:02:25.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ms6g8" for this suite. Feb 23 11:02:31.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:02:31.479: INFO: namespace: e2e-tests-kubectl-ms6g8, resource: bindings, ignored listing per whitelist Feb 23 11:02:31.525: INFO: namespace e2e-tests-kubectl-ms6g8 deletion completed in 6.296799532s • [SLOW TEST:13.223 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:02:31.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 23 11:02:32.235: INFO: created pod pod-service-account-defaultsa Feb 23 11:02:32.235: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 23 11:02:32.265: INFO: created pod pod-service-account-mountsa Feb 23 11:02:32.265: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 23 11:02:32.277: INFO: created pod pod-service-account-nomountsa Feb 23 11:02:32.277: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 23 11:02:32.415: INFO: created pod pod-service-account-defaultsa-mountspec Feb 23 11:02:32.415: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 23 11:02:32.458: INFO: created pod pod-service-account-mountsa-mountspec Feb 23 11:02:32.458: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 23 11:02:32.647: INFO: 
created pod pod-service-account-nomountsa-mountspec Feb 23 11:02:32.647: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 23 11:02:33.074: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 23 11:02:33.074: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 23 11:02:33.985: INFO: created pod pod-service-account-mountsa-nomountspec Feb 23 11:02:33.985: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 23 11:02:34.722: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 23 11:02:34.722: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:02:34.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-jjxwf" for this suite. Feb 23 11:03:06.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:03:06.846: INFO: namespace: e2e-tests-svcaccounts-jjxwf, resource: bindings, ignored listing per whitelist Feb 23 11:03:06.863: INFO: namespace e2e-tests-svcaccounts-jjxwf deletion completed in 31.731675063s • [SLOW TEST:35.338 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:03:06.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:03:37.254: INFO: Container started at 2020-02-23 11:03:16 +0000 UTC, pod became ready at 2020-02-23 11:03:35 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:03:37.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-k6lpl" for this suite. 
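The readiness-probe spec above comes down to one observation: the container started at 11:03:16 but the pod was only reported ready at 11:03:35, because readiness is gated by the probe's initial delay, and a readiness probe never restarts the container. The log does not include the pod manifest, so the sketch below is only an illustration of the pattern being exercised, not the e2e framework's actual spec: pod name, image, command and timings are all made up. It builds the object with the k8s.io/api types and prints JSON that can be piped into kubectl apply -f -.

```go
// readiness_probe_pod.go -- illustrative only, not the e2e test's manifest.
// A pod whose readiness probe has an initial delay: the container runs right
// away, but the pod is reported NotReady until the delay elapses and the
// probe starts passing. A failing readiness probe never restarts the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 20, // roughly the started->ready gap seen in the log
		PeriodSeconds:       5,
	}
	// Exec is a promoted field of the probe's embedded handler struct, so this
	// assignment works across k8s.io/api versions that later renamed the struct.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:           "busybox",
				Image:          "docker.io/library/busybox:1.29",
				Command:        []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
				ReadinessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // pipe into: kubectl apply -f -
}
```

Watching such a pod with kubectl get pod -w shows Running with READY 0/1 for the first stretch, then 1/1 once the probe starts succeeding, with RESTARTS staying at 0 throughout.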
Feb 23 11:04:01.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:04:01.425: INFO: namespace: e2e-tests-container-probe-k6lpl, resource: bindings, ignored listing per whitelist Feb 23 11:04:01.487: INFO: namespace e2e-tests-container-probe-k6lpl deletion completed in 24.224129398s • [SLOW TEST:54.624 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:04:01.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-697qv.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-697qv.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-697qv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-697qv.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-697qv.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-697qv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 23 11:04:18.009: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.017: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.035: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.047: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.056: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.061: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod 
e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.067: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-697qv.svc.cluster.local from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.072: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.076: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.080: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-32aae033-562c-11ea-8363-0242ac110008) Feb 23 11:04:18.080: INFO: Lookups using e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-697qv.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 23 11:04:23.227: INFO: DNS probes using e2e-tests-dns-697qv/dns-test-32aae033-562c-11ea-8363-0242ac110008 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:04:23.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-697qv" for this suite. 
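The wheezy and jessie probe loops above check the same small set of facts over UDP and TCP: that kubernetes.default, kubernetes.default.svc and kubernetes.default.svc.cluster.local resolve from inside a pod, that the pod's own hostname and generated A record resolve, and that this keeps working for the duration of the test. The transient "Unable to read jessie_*" errors appear to be the framework polling for result files the jessie container has not written yet, which is why the same lookups are reported as succeeded a few seconds later. For reference, a standard-library Go sketch of the same service-name lookups; it assumes the default cluster domain cluster.local and has to run inside a pod, since the short forms depend on the kubelet-written search path in /etc/resolv.conf:

```go
// dns_check.go -- spot-check, from inside a pod, the service names the e2e
// DNS test probes with dig. Uses the pod's own resolver (/etc/resolv.conf),
// so the short forms rely on the kubelet-injected search path.
package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local", // assumes the default cluster domain
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("FAIL %-40s %v\n", name, err)
			continue
		}
		fmt.Printf("OK   %-40s %v\n", name, addrs)
	}
}
```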
Feb 23 11:04:31.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:04:31.741: INFO: namespace: e2e-tests-dns-697qv, resource: bindings, ignored listing per whitelist Feb 23 11:04:31.775: INFO: namespace e2e-tests-dns-697qv deletion completed in 8.297946859s • [SLOW TEST:30.287 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:04:31.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 23 11:04:31.962: INFO: PodSpec: initContainers in spec.initContainers Feb 23 11:05:45.434: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-44b72e48-562c-11ea-8363-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-7645p", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-7645p/pods/pod-init-44b72e48-562c-11ea-8363-0242ac110008", UID:"44b801a0-562c-11ea-a994-fa163e34d433", ResourceVersion:"22634382", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718052671, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"962689793"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vsc7n", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00192a8c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vsc7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vsc7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vsc7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000fd2948), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020b4060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000fd2a40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000fd2e40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000fd2e48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000fd2e4c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052672, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052672, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052672, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052671, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001344140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000504380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000504540)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://89a888f8c7775b81e4155128cd241070b8294490f58bd6c5c02721659dfdbc8f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001344180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001344160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:05:45.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7645p" for this suite. Feb 23 11:06:09.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:06:09.933: INFO: namespace: e2e-tests-init-container-7645p, resource: bindings, ignored listing per whitelist Feb 23 11:06:09.941: INFO: namespace e2e-tests-init-container-7645p deletion completed in 24.464630071s • [SLOW TEST:98.165 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:06:09.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Feb 23 11:06:20.388: INFO: Pod pod-hostip-7f3d78f6-562c-11ea-8363-0242ac110008 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:06:20.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-46swn" for this suite. 
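The InitContainer failure case a few entries above is dumped as a raw Go struct, which hides how small the pod really is: two init containers in front of a pause container, where init1 runs /bin/false and keeps failing, init2 runs /bin/true but never gets its turn, and with RestartPolicy Always the kubelet keeps restarting init1 (RestartCount:3 in the dump) so the app container run1 never starts. Below is a readable reconstruction of roughly that pod, with names, images and resources taken from the dump; the random "time" label and the injected service-account token volume are left out.

```go
// failing_init_pod.go -- readable reconstruction of (roughly) the pod the
// e2e InitContainer test dumps above: init1 exits non-zero forever, so with
// RestartPolicy Always the kubelet keeps retrying it and run1 never starts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// the dump shows 100m CPU and 52428800 bytes (50Mi) as both limit and request
	limits := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("52428800"),
	}
	pod := &corev1.Pod{
		TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init-demo",
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:      "run1",
				Image:     "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{Limits: limits, Requests: limits},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // apply with kubectl to watch init1's restart count climb
}
```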
Feb 23 11:06:44.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:06:44.547: INFO: namespace: e2e-tests-pods-46swn, resource: bindings, ignored listing per whitelist Feb 23 11:06:44.723: INFO: namespace e2e-tests-pods-46swn deletion completed in 24.324536542s • [SLOW TEST:34.782 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:06:44.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:06:44.925: INFO: Creating deployment "test-recreate-deployment" Feb 23 11:06:44.976: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 23 11:06:44.984: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 23 11:06:47.553: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 23 11:06:47.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052804, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:06:49.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052804, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:06:51.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052804, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:06:53.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052805, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718052804, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:06:55.641: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 23 11:06:55.662: INFO: Updating deployment test-recreate-deployment Feb 23 11:06:55.662: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 23 11:06:56.611: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-vbs8f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vbs8f/deployments/test-recreate-deployment,UID:93f7fb69-562c-11ea-a994-fa163e34d433,ResourceVersion:22634550,Generation:2,CreationTimestamp:2020-02-23 11:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-23 11:06:56 +0000 UTC 2020-02-23 11:06:56 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-23 11:06:56 +0000 UTC 2020-02-23 11:06:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 23 11:06:56.625: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-vbs8f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vbs8f/replicasets/test-recreate-deployment-589c4bfd,UID:9aa94fb5-562c-11ea-a994-fa163e34d433,ResourceVersion:22634548,Generation:1,CreationTimestamp:2020-02-23 11:06:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 93f7fb69-562c-11ea-a994-fa163e34d433 0xc00165615f 0xc001656420}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 23 11:06:56.625: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 23 11:06:56.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-vbs8f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vbs8f/replicasets/test-recreate-deployment-5bf7f65dc,UID:94016b11-562c-11ea-a994-fa163e34d433,ResourceVersion:22634539,Generation:2,CreationTimestamp:2020-02-23 11:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 93f7fb69-562c-11ea-a994-fa163e34d433 0xc0016564e0 0xc0016564e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 23 11:06:56.749: INFO: Pod "test-recreate-deployment-589c4bfd-xmq84" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-xmq84,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-vbs8f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vbs8f/pods/test-recreate-deployment-589c4bfd-xmq84,UID:9ac7d742-562c-11ea-a994-fa163e34d433,ResourceVersion:22634545,Generation:0,CreationTimestamp:2020-02-23 11:06:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 9aa94fb5-562c-11ea-a994-fa163e34d433 0xc0019b8ddf 0xc0019b8df0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dddt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dddt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4dddt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019b8e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019b8e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:06:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:06:56.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vbs8f" for this suite. Feb 23 11:07:04.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:07:05.109: INFO: namespace: e2e-tests-deployment-vbs8f, resource: bindings, ignored listing per whitelist Feb 23 11:07:05.113: INFO: namespace e2e-tests-deployment-vbs8f deletion completed in 8.310287624s • [SLOW TEST:20.390 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:07:05.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
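Before the lifecycle-hook spec gets going, it is worth unpacking the RecreateDeployment output above: the dump shows Strategy Type Recreate, so when the test flips the pod template from the redis image to nginx the old ReplicaSet is scaled to zero first and the new one is only created afterwards, which is exactly the "new pods will not run with old pods" property being verified. A sketch of a Deployment in that shape, using the images and labels from the dump (everything else is simplified and not the test's literal object):

```go
// recreate_deployment.go -- illustrative Deployment using the Recreate
// strategy exercised above: on a template change the old ReplicaSet is
// scaled to 0 before the new one comes up, so pods from the two revisions
// never run side by side.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	dep := &appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{Kind: "Deployment", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						// the test starts from this redis image and then updates
						// the template to nginx:1.14-alpine to trigger the rollout
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
```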
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 23 11:07:27.213: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:27.232: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:29.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:29.243: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:31.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:31.289: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:33.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:33.240: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:35.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:35.275: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:37.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:37.261: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:39.234: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:39.250: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:41.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:41.254: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:43.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:50.577: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:51.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:51.298: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:53.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:53.293: INFO: Pod pod-with-prestop-exec-hook still exists Feb 23 11:07:55.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 23 11:07:55.287: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:07:55.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-86tst" for this suite. 
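The long tail of "Pod pod-with-prestop-exec-hook still exists" lines above is the point of this spec: deleting the pod does not kill it immediately, because the kubelet first runs the container's preStop exec hook (within the termination grace period) before stopping the container, and the test then checks the hook's effect against the handler pod created in the BeforeEach. The manifest itself is not in the log, so the sketch below only shows the general shape of such a pod, assuming a v1.13-era k8s.io/api where the hook type is still corev1.Handler (newer releases renamed it LifecycleHandler); image, command and sleep length are illustrative.

```go
// prestop_hook_pod.go -- illustrative pod with a preStop exec hook; on delete
// the kubelet runs the hook before sending SIGTERM, which is why the test
// above sees the pod linger for a while after the delete call.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler assumes a v1.13-era API; later versions
					// call this type LifecycleHandler.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// runs to completion (or until the grace period
							// expires) before the container is stopped
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```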
Feb 23 11:08:19.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:08:19.463: INFO: namespace: e2e-tests-container-lifecycle-hook-86tst, resource: bindings, ignored listing per whitelist Feb 23 11:08:19.579: INFO: namespace e2e-tests-container-lifecycle-hook-86tst deletion completed in 24.232954881s • [SLOW TEST:74.466 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:08:19.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-p84jb I0223 11:08:19.949673 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-p84jb, replica count: 1 I0223 11:08:21.000575 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:22.001609 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:23.002065 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:24.002728 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:25.003254 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:26.003845 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:27.004355 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:28.004695 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:29.005021 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 11:08:30.005350 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 
0 unknown, 0 runningButNotReady I0223 11:08:31.005666 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 23 11:08:31.180: INFO: Created: latency-svc-6zxhs Feb 23 11:08:31.276: INFO: Got endpoints: latency-svc-6zxhs [170.982336ms] Feb 23 11:08:31.388: INFO: Created: latency-svc-lqdgf Feb 23 11:08:31.519: INFO: Got endpoints: latency-svc-lqdgf [240.624831ms] Feb 23 11:08:31.541: INFO: Created: latency-svc-l476m Feb 23 11:08:31.562: INFO: Got endpoints: latency-svc-l476m [283.819605ms] Feb 23 11:08:31.719: INFO: Created: latency-svc-lp7ms Feb 23 11:08:31.774: INFO: Got endpoints: latency-svc-lp7ms [494.581542ms] Feb 23 11:08:31.778: INFO: Created: latency-svc-nq2zg Feb 23 11:08:31.922: INFO: Got endpoints: latency-svc-nq2zg [643.799239ms] Feb 23 11:08:31.945: INFO: Created: latency-svc-mxjgv Feb 23 11:08:31.955: INFO: Got endpoints: latency-svc-mxjgv [676.124288ms] Feb 23 11:08:32.009: INFO: Created: latency-svc-qs4g4 Feb 23 11:08:32.248: INFO: Got endpoints: latency-svc-qs4g4 [968.260686ms] Feb 23 11:08:32.294: INFO: Created: latency-svc-jbrs2 Feb 23 11:08:32.329: INFO: Got endpoints: latency-svc-jbrs2 [1.051044186s] Feb 23 11:08:32.478: INFO: Created: latency-svc-h9xms Feb 23 11:08:32.497: INFO: Got endpoints: latency-svc-h9xms [1.217570375s] Feb 23 11:08:32.569: INFO: Created: latency-svc-k7sbh Feb 23 11:08:32.709: INFO: Got endpoints: latency-svc-k7sbh [1.432642646s] Feb 23 11:08:32.737: INFO: Created: latency-svc-xl8k4 Feb 23 11:08:32.749: INFO: Got endpoints: latency-svc-xl8k4 [1.470211576s] Feb 23 11:08:32.921: INFO: Created: latency-svc-8mcph Feb 23 11:08:32.949: INFO: Got endpoints: latency-svc-8mcph [1.669227034s] Feb 23 11:08:33.102: INFO: Created: latency-svc-fz8wq Feb 23 11:08:33.122: INFO: Got endpoints: latency-svc-fz8wq [1.843253986s] Feb 23 11:08:33.306: INFO: Created: latency-svc-g5s69 Feb 23 11:08:33.327: INFO: Got endpoints: latency-svc-g5s69 [2.048945119s] Feb 23 11:08:33.368: INFO: Created: latency-svc-2zlj2 Feb 23 11:08:33.384: INFO: Got endpoints: latency-svc-2zlj2 [2.105992626s] Feb 23 11:08:33.530: INFO: Created: latency-svc-2g7w7 Feb 23 11:08:33.542: INFO: Got endpoints: latency-svc-2g7w7 [2.262682057s] Feb 23 11:08:33.731: INFO: Created: latency-svc-cx4pc Feb 23 11:08:33.775: INFO: Got endpoints: latency-svc-cx4pc [2.255235084s] Feb 23 11:08:33.831: INFO: Created: latency-svc-zlnxj Feb 23 11:08:33.968: INFO: Created: latency-svc-xj7bw Feb 23 11:08:33.979: INFO: Got endpoints: latency-svc-zlnxj [2.416009189s] Feb 23 11:08:34.006: INFO: Got endpoints: latency-svc-xj7bw [2.231946985s] Feb 23 11:08:34.161: INFO: Created: latency-svc-8dtnk Feb 23 11:08:34.165: INFO: Got endpoints: latency-svc-8dtnk [2.242397525s] Feb 23 11:08:34.256: INFO: Created: latency-svc-zj9kh Feb 23 11:08:34.381: INFO: Got endpoints: latency-svc-zj9kh [2.424956127s] Feb 23 11:08:34.393: INFO: Created: latency-svc-qqrmj Feb 23 11:08:34.411: INFO: Got endpoints: latency-svc-qqrmj [2.162884906s] Feb 23 11:08:34.440: INFO: Created: latency-svc-hsmhf Feb 23 11:08:34.456: INFO: Got endpoints: latency-svc-hsmhf [2.126729973s] Feb 23 11:08:34.615: INFO: Created: latency-svc-dvq9p Feb 23 11:08:34.625: INFO: Got endpoints: latency-svc-dvq9p [2.127779634s] Feb 23 11:08:34.799: INFO: Created: latency-svc-82g6k Feb 23 11:08:34.819: INFO: Got endpoints: latency-svc-82g6k [2.108923277s] Feb 23 11:08:34.897: INFO: Created: latency-svc-9jlt9 Feb 23 11:08:35.029: INFO: Got endpoints: latency-svc-9jlt9 
[2.279647192s] Feb 23 11:08:35.058: INFO: Created: latency-svc-fdnvc Feb 23 11:08:35.074: INFO: Got endpoints: latency-svc-fdnvc [2.124781246s] Feb 23 11:08:35.289: INFO: Created: latency-svc-dnxc8 Feb 23 11:08:35.309: INFO: Got endpoints: latency-svc-dnxc8 [2.187451492s] Feb 23 11:08:35.370: INFO: Created: latency-svc-c7kfv Feb 23 11:08:35.505: INFO: Got endpoints: latency-svc-c7kfv [2.177701409s] Feb 23 11:08:35.551: INFO: Created: latency-svc-6g6z4 Feb 23 11:08:35.566: INFO: Got endpoints: latency-svc-6g6z4 [2.181996834s] Feb 23 11:08:35.838: INFO: Created: latency-svc-cps8h Feb 23 11:08:35.865: INFO: Got endpoints: latency-svc-cps8h [2.321900064s] Feb 23 11:08:36.059: INFO: Created: latency-svc-vtp9z Feb 23 11:08:36.081: INFO: Got endpoints: latency-svc-vtp9z [2.306606161s] Feb 23 11:08:36.142: INFO: Created: latency-svc-lxjhh Feb 23 11:08:36.321: INFO: Got endpoints: latency-svc-lxjhh [2.341981809s] Feb 23 11:08:36.343: INFO: Created: latency-svc-7t4ct Feb 23 11:08:36.378: INFO: Got endpoints: latency-svc-7t4ct [2.372510243s] Feb 23 11:08:36.680: INFO: Created: latency-svc-29gzr Feb 23 11:08:36.827: INFO: Got endpoints: latency-svc-29gzr [2.662002055s] Feb 23 11:08:36.833: INFO: Created: latency-svc-hllzf Feb 23 11:08:36.843: INFO: Got endpoints: latency-svc-hllzf [2.462428528s] Feb 23 11:08:37.028: INFO: Created: latency-svc-dt999 Feb 23 11:08:37.071: INFO: Got endpoints: latency-svc-dt999 [2.660103782s] Feb 23 11:08:37.116: INFO: Created: latency-svc-hk2tk Feb 23 11:08:37.278: INFO: Got endpoints: latency-svc-hk2tk [2.822089213s] Feb 23 11:08:37.324: INFO: Created: latency-svc-sszbb Feb 23 11:08:37.355: INFO: Got endpoints: latency-svc-sszbb [2.730596921s] Feb 23 11:08:37.515: INFO: Created: latency-svc-vghnv Feb 23 11:08:37.541: INFO: Got endpoints: latency-svc-vghnv [2.72187067s] Feb 23 11:08:37.581: INFO: Created: latency-svc-z96dr Feb 23 11:08:37.742: INFO: Got endpoints: latency-svc-z96dr [2.71247383s] Feb 23 11:08:37.767: INFO: Created: latency-svc-vtdg7 Feb 23 11:08:37.800: INFO: Got endpoints: latency-svc-vtdg7 [2.72529884s] Feb 23 11:08:37.944: INFO: Created: latency-svc-pvwcx Feb 23 11:08:37.970: INFO: Got endpoints: latency-svc-pvwcx [2.660125868s] Feb 23 11:08:38.152: INFO: Created: latency-svc-2msmx Feb 23 11:08:38.169: INFO: Got endpoints: latency-svc-2msmx [2.663782888s] Feb 23 11:08:38.348: INFO: Created: latency-svc-zntlz Feb 23 11:08:38.377: INFO: Got endpoints: latency-svc-zntlz [2.810342753s] Feb 23 11:08:38.418: INFO: Created: latency-svc-27gmn Feb 23 11:08:38.588: INFO: Got endpoints: latency-svc-27gmn [2.722994041s] Feb 23 11:08:38.631: INFO: Created: latency-svc-xfgnw Feb 23 11:08:38.691: INFO: Got endpoints: latency-svc-xfgnw [2.609140367s] Feb 23 11:08:38.823: INFO: Created: latency-svc-gr8v8 Feb 23 11:08:38.879: INFO: Got endpoints: latency-svc-gr8v8 [2.55843272s] Feb 23 11:08:39.008: INFO: Created: latency-svc-295px Feb 23 11:08:39.032: INFO: Got endpoints: latency-svc-295px [2.653187263s] Feb 23 11:08:39.225: INFO: Created: latency-svc-xs6dn Feb 23 11:08:39.237: INFO: Got endpoints: latency-svc-xs6dn [2.409165206s] Feb 23 11:08:39.411: INFO: Created: latency-svc-qzfwb Feb 23 11:08:39.584: INFO: Got endpoints: latency-svc-qzfwb [2.740844561s] Feb 23 11:08:39.604: INFO: Created: latency-svc-b8ll7 Feb 23 11:08:39.608: INFO: Got endpoints: latency-svc-b8ll7 [2.536914761s] Feb 23 11:08:39.835: INFO: Created: latency-svc-8bz2z Feb 23 11:08:39.844: INFO: Got endpoints: latency-svc-8bz2z [2.565054306s] Feb 23 11:08:39.879: INFO: Created: latency-svc-5h4ft Feb 23 
11:08:39.908: INFO: Got endpoints: latency-svc-5h4ft [2.55211201s] Feb 23 11:08:40.034: INFO: Created: latency-svc-wm6kz Feb 23 11:08:40.046: INFO: Got endpoints: latency-svc-wm6kz [2.505138122s] Feb 23 11:08:40.223: INFO: Created: latency-svc-lm48p Feb 23 11:08:40.240: INFO: Got endpoints: latency-svc-lm48p [2.497822923s] Feb 23 11:08:40.495: INFO: Created: latency-svc-p5cfp Feb 23 11:08:40.510: INFO: Got endpoints: latency-svc-p5cfp [2.709852154s] Feb 23 11:08:40.641: INFO: Created: latency-svc-t8dwm Feb 23 11:08:40.641: INFO: Got endpoints: latency-svc-t8dwm [2.671314753s] Feb 23 11:08:40.820: INFO: Created: latency-svc-8ck46 Feb 23 11:08:40.834: INFO: Got endpoints: latency-svc-8ck46 [2.664816347s] Feb 23 11:08:40.877: INFO: Created: latency-svc-tnl4g Feb 23 11:08:40.907: INFO: Got endpoints: latency-svc-tnl4g [2.530349002s] Feb 23 11:08:40.988: INFO: Created: latency-svc-69ln4 Feb 23 11:08:40.994: INFO: Got endpoints: latency-svc-69ln4 [2.406065995s] Feb 23 11:08:41.055: INFO: Created: latency-svc-4tbjw Feb 23 11:08:41.064: INFO: Got endpoints: latency-svc-4tbjw [2.372396308s] Feb 23 11:08:41.157: INFO: Created: latency-svc-qfl42 Feb 23 11:08:41.176: INFO: Got endpoints: latency-svc-qfl42 [2.295869485s] Feb 23 11:08:41.336: INFO: Created: latency-svc-dwhk6 Feb 23 11:08:41.405: INFO: Created: latency-svc-zk9s8 Feb 23 11:08:41.552: INFO: Created: latency-svc-zrzgw Feb 23 11:08:41.556: INFO: Got endpoints: latency-svc-zrzgw [1.971626081s] Feb 23 11:08:41.556: INFO: Got endpoints: latency-svc-dwhk6 [2.523887872s] Feb 23 11:08:41.560: INFO: Got endpoints: latency-svc-zk9s8 [2.323212233s] Feb 23 11:08:41.633: INFO: Created: latency-svc-b6p66 Feb 23 11:08:41.714: INFO: Got endpoints: latency-svc-b6p66 [2.10509704s] Feb 23 11:08:41.736: INFO: Created: latency-svc-nw6m8 Feb 23 11:08:41.757: INFO: Got endpoints: latency-svc-nw6m8 [1.913407803s] Feb 23 11:08:41.900: INFO: Created: latency-svc-pll8w Feb 23 11:08:41.910: INFO: Got endpoints: latency-svc-pll8w [2.002172004s] Feb 23 11:08:41.961: INFO: Created: latency-svc-zgn44 Feb 23 11:08:41.962: INFO: Got endpoints: latency-svc-zgn44 [1.915835313s] Feb 23 11:08:42.084: INFO: Created: latency-svc-mpcj9 Feb 23 11:08:42.089: INFO: Got endpoints: latency-svc-mpcj9 [1.849500439s] Feb 23 11:08:42.142: INFO: Created: latency-svc-fxg5k Feb 23 11:08:42.293: INFO: Got endpoints: latency-svc-fxg5k [1.782340262s] Feb 23 11:08:42.339: INFO: Created: latency-svc-qd994 Feb 23 11:08:42.339: INFO: Got endpoints: latency-svc-qd994 [1.697726644s] Feb 23 11:08:42.418: INFO: Created: latency-svc-jl55k Feb 23 11:08:42.477: INFO: Got endpoints: latency-svc-jl55k [184.010582ms] Feb 23 11:08:42.509: INFO: Created: latency-svc-wn72j Feb 23 11:08:42.560: INFO: Got endpoints: latency-svc-wn72j [1.726166717s] Feb 23 11:08:42.677: INFO: Created: latency-svc-q62hl Feb 23 11:08:42.696: INFO: Got endpoints: latency-svc-q62hl [1.788934143s] Feb 23 11:08:42.735: INFO: Created: latency-svc-h65xd Feb 23 11:08:42.832: INFO: Got endpoints: latency-svc-h65xd [1.83720691s] Feb 23 11:08:42.864: INFO: Created: latency-svc-lwqzd Feb 23 11:08:42.875: INFO: Got endpoints: latency-svc-lwqzd [1.810956778s] Feb 23 11:08:42.943: INFO: Created: latency-svc-jrxqg Feb 23 11:08:43.036: INFO: Got endpoints: latency-svc-jrxqg [1.859423385s] Feb 23 11:08:43.081: INFO: Created: latency-svc-ksmw5 Feb 23 11:08:43.088: INFO: Got endpoints: latency-svc-ksmw5 [1.532346864s] Feb 23 11:08:43.206: INFO: Created: latency-svc-4n7cx Feb 23 11:08:43.215: INFO: Got endpoints: latency-svc-4n7cx [1.658688374s] Feb 23 
11:08:43.287: INFO: Created: latency-svc-mz8r9 Feb 23 11:08:43.359: INFO: Got endpoints: latency-svc-mz8r9 [1.799190479s] Feb 23 11:08:43.380: INFO: Created: latency-svc-x6869 Feb 23 11:08:43.402: INFO: Got endpoints: latency-svc-x6869 [1.688203979s] Feb 23 11:08:43.556: INFO: Created: latency-svc-qcr5b Feb 23 11:08:43.571: INFO: Got endpoints: latency-svc-qcr5b [1.814063609s] Feb 23 11:08:43.720: INFO: Created: latency-svc-5fdkz Feb 23 11:08:43.739: INFO: Got endpoints: latency-svc-5fdkz [1.828808391s] Feb 23 11:08:43.800: INFO: Created: latency-svc-ltvz8 Feb 23 11:08:43.814: INFO: Got endpoints: latency-svc-ltvz8 [1.851431113s] Feb 23 11:08:43.987: INFO: Created: latency-svc-45xkk Feb 23 11:08:44.185: INFO: Got endpoints: latency-svc-45xkk [2.095696035s] Feb 23 11:08:44.220: INFO: Created: latency-svc-hn2q7 Feb 23 11:08:44.235: INFO: Got endpoints: latency-svc-hn2q7 [1.896126253s] Feb 23 11:08:44.473: INFO: Created: latency-svc-gmrfs Feb 23 11:08:44.492: INFO: Got endpoints: latency-svc-gmrfs [2.01459514s] Feb 23 11:08:44.676: INFO: Created: latency-svc-4g2cf Feb 23 11:08:44.714: INFO: Got endpoints: latency-svc-4g2cf [2.153001079s] Feb 23 11:08:44.860: INFO: Created: latency-svc-l2wt6 Feb 23 11:08:44.898: INFO: Got endpoints: latency-svc-l2wt6 [2.201861966s] Feb 23 11:08:45.008: INFO: Created: latency-svc-5cdft Feb 23 11:08:45.017: INFO: Got endpoints: latency-svc-5cdft [2.185193171s] Feb 23 11:08:45.070: INFO: Created: latency-svc-vcw2g Feb 23 11:08:45.085: INFO: Got endpoints: latency-svc-vcw2g [2.2097927s] Feb 23 11:08:45.197: INFO: Created: latency-svc-cdgtx Feb 23 11:08:45.214: INFO: Got endpoints: latency-svc-cdgtx [2.178247477s] Feb 23 11:08:45.256: INFO: Created: latency-svc-hfjjn Feb 23 11:08:45.264: INFO: Got endpoints: latency-svc-hfjjn [2.175670537s] Feb 23 11:08:45.359: INFO: Created: latency-svc-5xppv Feb 23 11:08:45.394: INFO: Got endpoints: latency-svc-5xppv [2.178807252s] Feb 23 11:08:45.428: INFO: Created: latency-svc-8p8wg Feb 23 11:08:45.534: INFO: Got endpoints: latency-svc-8p8wg [2.174993465s] Feb 23 11:08:45.564: INFO: Created: latency-svc-6hfzk Feb 23 11:08:45.594: INFO: Got endpoints: latency-svc-6hfzk [2.191564005s] Feb 23 11:08:45.734: INFO: Created: latency-svc-6qjxd Feb 23 11:08:45.811: INFO: Got endpoints: latency-svc-6qjxd [2.239405518s] Feb 23 11:08:46.384: INFO: Created: latency-svc-cprfz Feb 23 11:08:46.658: INFO: Got endpoints: latency-svc-cprfz [2.918703299s] Feb 23 11:08:46.854: INFO: Created: latency-svc-xf4zf Feb 23 11:08:46.900: INFO: Got endpoints: latency-svc-xf4zf [3.085815898s] Feb 23 11:08:47.057: INFO: Created: latency-svc-jmk29 Feb 23 11:08:47.073: INFO: Got endpoints: latency-svc-jmk29 [2.887376781s] Feb 23 11:08:47.127: INFO: Created: latency-svc-f6rwm Feb 23 11:08:47.252: INFO: Got endpoints: latency-svc-f6rwm [3.016816194s] Feb 23 11:08:47.283: INFO: Created: latency-svc-tx5h2 Feb 23 11:08:47.290: INFO: Got endpoints: latency-svc-tx5h2 [2.797620039s] Feb 23 11:08:47.327: INFO: Created: latency-svc-wwkpk Feb 23 11:08:47.337: INFO: Got endpoints: latency-svc-wwkpk [2.623215292s] Feb 23 11:08:47.506: INFO: Created: latency-svc-mgsbj Feb 23 11:08:47.508: INFO: Got endpoints: latency-svc-mgsbj [2.60983039s] Feb 23 11:08:47.677: INFO: Created: latency-svc-wrp4q Feb 23 11:08:47.691: INFO: Got endpoints: latency-svc-wrp4q [2.673496783s] Feb 23 11:08:47.758: INFO: Created: latency-svc-nh67f Feb 23 11:08:47.846: INFO: Got endpoints: latency-svc-nh67f [2.76078404s] Feb 23 11:08:47.850: INFO: Created: latency-svc-m2z48 Feb 23 11:08:47.873: INFO: 
Got endpoints: latency-svc-m2z48 [2.658599441s] Feb 23 11:08:48.200: INFO: Created: latency-svc-xdl65 Feb 23 11:08:48.209: INFO: Got endpoints: latency-svc-xdl65 [2.944285926s] Feb 23 11:08:48.260: INFO: Created: latency-svc-hv4g6 Feb 23 11:08:48.392: INFO: Got endpoints: latency-svc-hv4g6 [2.998231676s] Feb 23 11:08:48.420: INFO: Created: latency-svc-77h8c Feb 23 11:08:48.453: INFO: Got endpoints: latency-svc-77h8c [2.917775702s] Feb 23 11:08:48.616: INFO: Created: latency-svc-4nb4r Feb 23 11:08:48.744: INFO: Got endpoints: latency-svc-4nb4r [3.15040519s] Feb 23 11:08:48.770: INFO: Created: latency-svc-4n2mt Feb 23 11:08:48.774: INFO: Got endpoints: latency-svc-4n2mt [2.963181147s] Feb 23 11:08:48.837: INFO: Created: latency-svc-x2bv2 Feb 23 11:08:48.979: INFO: Got endpoints: latency-svc-x2bv2 [2.321061268s] Feb 23 11:08:49.000: INFO: Created: latency-svc-pdhjh Feb 23 11:08:49.019: INFO: Got endpoints: latency-svc-pdhjh [2.11868586s] Feb 23 11:08:49.076: INFO: Created: latency-svc-cdwzv Feb 23 11:08:49.183: INFO: Got endpoints: latency-svc-cdwzv [2.110161122s] Feb 23 11:08:49.192: INFO: Created: latency-svc-fv9bc Feb 23 11:08:49.211: INFO: Got endpoints: latency-svc-fv9bc [1.958505975s] Feb 23 11:08:49.244: INFO: Created: latency-svc-kqzwq Feb 23 11:08:49.260: INFO: Got endpoints: latency-svc-kqzwq [1.969848744s] Feb 23 11:08:49.369: INFO: Created: latency-svc-z6xxr Feb 23 11:08:49.377: INFO: Got endpoints: latency-svc-z6xxr [2.039326496s] Feb 23 11:08:49.426: INFO: Created: latency-svc-ncrd5 Feb 23 11:08:49.447: INFO: Got endpoints: latency-svc-ncrd5 [1.93807943s] Feb 23 11:08:49.608: INFO: Created: latency-svc-wck6c Feb 23 11:08:49.651: INFO: Got endpoints: latency-svc-wck6c [1.96040793s] Feb 23 11:08:49.789: INFO: Created: latency-svc-7kt42 Feb 23 11:08:49.879: INFO: Got endpoints: latency-svc-7kt42 [2.032499567s] Feb 23 11:08:49.952: INFO: Created: latency-svc-qplqp Feb 23 11:08:49.962: INFO: Got endpoints: latency-svc-qplqp [2.089059867s] Feb 23 11:08:50.040: INFO: Created: latency-svc-d95pb Feb 23 11:08:50.205: INFO: Created: latency-svc-79t94 Feb 23 11:08:50.229: INFO: Got endpoints: latency-svc-79t94 [1.836475884s] Feb 23 11:08:50.229: INFO: Got endpoints: latency-svc-d95pb [2.020400073s] Feb 23 11:08:50.375: INFO: Created: latency-svc-kgs9j Feb 23 11:08:50.382: INFO: Got endpoints: latency-svc-kgs9j [1.929320991s] Feb 23 11:08:50.428: INFO: Created: latency-svc-vfr4n Feb 23 11:08:50.452: INFO: Got endpoints: latency-svc-vfr4n [1.707154289s] Feb 23 11:08:50.636: INFO: Created: latency-svc-vvd4c Feb 23 11:08:50.729: INFO: Got endpoints: latency-svc-vvd4c [1.954166048s] Feb 23 11:08:50.765: INFO: Created: latency-svc-j8j9l Feb 23 11:08:50.779: INFO: Got endpoints: latency-svc-j8j9l [1.799552914s] Feb 23 11:08:50.837: INFO: Created: latency-svc-h4q2h Feb 23 11:08:50.890: INFO: Got endpoints: latency-svc-h4q2h [1.870682619s] Feb 23 11:08:50.919: INFO: Created: latency-svc-8hffv Feb 23 11:08:50.946: INFO: Got endpoints: latency-svc-8hffv [1.761920528s] Feb 23 11:08:50.985: INFO: Created: latency-svc-hxxr4 Feb 23 11:08:51.053: INFO: Got endpoints: latency-svc-hxxr4 [1.841720063s] Feb 23 11:08:51.099: INFO: Created: latency-svc-67qvk Feb 23 11:08:51.122: INFO: Got endpoints: latency-svc-67qvk [1.861833107s] Feb 23 11:08:51.256: INFO: Created: latency-svc-gdf5z Feb 23 11:08:51.298: INFO: Got endpoints: latency-svc-gdf5z [1.921063759s] Feb 23 11:08:51.444: INFO: Created: latency-svc-8g7pd Feb 23 11:08:51.472: INFO: Got endpoints: latency-svc-8g7pd [2.024629388s] Feb 23 11:08:51.686: INFO: 
Created: latency-svc-886jh Feb 23 11:08:51.770: INFO: Created: latency-svc-wqxpn Feb 23 11:08:51.771: INFO: Got endpoints: latency-svc-886jh [2.118831638s] Feb 23 11:08:51.878: INFO: Got endpoints: latency-svc-wqxpn [1.997880016s] Feb 23 11:08:51.910: INFO: Created: latency-svc-cnbl8 Feb 23 11:08:51.931: INFO: Got endpoints: latency-svc-cnbl8 [1.96908473s] Feb 23 11:08:51.958: INFO: Created: latency-svc-lvzj9 Feb 23 11:08:51.961: INFO: Got endpoints: latency-svc-lvzj9 [1.730923407s] Feb 23 11:08:52.090: INFO: Created: latency-svc-2bs7b Feb 23 11:08:52.105: INFO: Got endpoints: latency-svc-2bs7b [1.875262653s] Feb 23 11:08:52.311: INFO: Created: latency-svc-ffx72 Feb 23 11:08:52.322: INFO: Got endpoints: latency-svc-ffx72 [1.939408426s] Feb 23 11:08:52.416: INFO: Created: latency-svc-kpc5n Feb 23 11:08:52.530: INFO: Got endpoints: latency-svc-kpc5n [2.077874553s] Feb 23 11:08:52.615: INFO: Created: latency-svc-zfcqm Feb 23 11:08:52.720: INFO: Got endpoints: latency-svc-zfcqm [1.990689306s] Feb 23 11:08:52.744: INFO: Created: latency-svc-5dwvf Feb 23 11:08:52.766: INFO: Got endpoints: latency-svc-5dwvf [1.986547395s] Feb 23 11:08:52.937: INFO: Created: latency-svc-mf6cj Feb 23 11:08:52.966: INFO: Got endpoints: latency-svc-mf6cj [2.076051514s] Feb 23 11:08:53.095: INFO: Created: latency-svc-z9gdd Feb 23 11:08:53.118: INFO: Got endpoints: latency-svc-z9gdd [2.171836542s] Feb 23 11:08:53.163: INFO: Created: latency-svc-87v2v Feb 23 11:08:53.171: INFO: Got endpoints: latency-svc-87v2v [2.118233064s] Feb 23 11:08:53.292: INFO: Created: latency-svc-ht4fg Feb 23 11:08:53.325: INFO: Got endpoints: latency-svc-ht4fg [2.202103732s] Feb 23 11:08:53.350: INFO: Created: latency-svc-qbctd Feb 23 11:08:53.365: INFO: Got endpoints: latency-svc-qbctd [2.066437695s] Feb 23 11:08:53.461: INFO: Created: latency-svc-7nbkn Feb 23 11:08:53.480: INFO: Got endpoints: latency-svc-7nbkn [2.008056493s] Feb 23 11:08:53.539: INFO: Created: latency-svc-x6v2n Feb 23 11:08:53.789: INFO: Got endpoints: latency-svc-x6v2n [2.01860248s] Feb 23 11:08:53.806: INFO: Created: latency-svc-g29vk Feb 23 11:08:53.822: INFO: Got endpoints: latency-svc-g29vk [1.944233096s] Feb 23 11:08:53.981: INFO: Created: latency-svc-ftxkr Feb 23 11:08:53.990: INFO: Got endpoints: latency-svc-ftxkr [2.058595471s] Feb 23 11:08:54.041: INFO: Created: latency-svc-fvnfn Feb 23 11:08:54.056: INFO: Got endpoints: latency-svc-fvnfn [2.095025496s] Feb 23 11:08:54.295: INFO: Created: latency-svc-rws6l Feb 23 11:08:54.302: INFO: Got endpoints: latency-svc-rws6l [2.197330421s] Feb 23 11:08:54.379: INFO: Created: latency-svc-svcs9 Feb 23 11:08:54.474: INFO: Got endpoints: latency-svc-svcs9 [2.151853801s] Feb 23 11:08:54.504: INFO: Created: latency-svc-qhld5 Feb 23 11:08:54.523: INFO: Got endpoints: latency-svc-qhld5 [1.992522021s] Feb 23 11:08:54.663: INFO: Created: latency-svc-n4src Feb 23 11:08:54.682: INFO: Got endpoints: latency-svc-n4src [1.961755085s] Feb 23 11:08:54.722: INFO: Created: latency-svc-fglc9 Feb 23 11:08:54.730: INFO: Got endpoints: latency-svc-fglc9 [1.96446212s] Feb 23 11:08:54.835: INFO: Created: latency-svc-c8xgb Feb 23 11:08:54.847: INFO: Got endpoints: latency-svc-c8xgb [1.88046605s] Feb 23 11:08:54.924: INFO: Created: latency-svc-ln487 Feb 23 11:08:55.020: INFO: Got endpoints: latency-svc-ln487 [1.901737521s] Feb 23 11:08:55.076: INFO: Created: latency-svc-4vrzx Feb 23 11:08:55.103: INFO: Got endpoints: latency-svc-4vrzx [1.930941762s] Feb 23 11:08:55.201: INFO: Created: latency-svc-2tc7t Feb 23 11:08:55.248: INFO: Got endpoints: 
latency-svc-2tc7t [1.922903542s] Feb 23 11:08:55.256: INFO: Created: latency-svc-whdjw Feb 23 11:08:55.279: INFO: Got endpoints: latency-svc-whdjw [1.914117452s] Feb 23 11:08:55.382: INFO: Created: latency-svc-vbh59 Feb 23 11:08:55.389: INFO: Got endpoints: latency-svc-vbh59 [1.908618284s] Feb 23 11:08:55.435: INFO: Created: latency-svc-bwqzk Feb 23 11:08:55.446: INFO: Got endpoints: latency-svc-bwqzk [1.656316779s] Feb 23 11:08:55.554: INFO: Created: latency-svc-695sg Feb 23 11:08:55.564: INFO: Got endpoints: latency-svc-695sg [1.7415954s] Feb 23 11:08:55.615: INFO: Created: latency-svc-qnjwd Feb 23 11:08:55.622: INFO: Got endpoints: latency-svc-qnjwd [1.632063514s] Feb 23 11:08:55.859: INFO: Created: latency-svc-v6xlf Feb 23 11:08:55.893: INFO: Got endpoints: latency-svc-v6xlf [1.836415855s] Feb 23 11:08:55.952: INFO: Created: latency-svc-xbsgq Feb 23 11:08:56.068: INFO: Got endpoints: latency-svc-xbsgq [1.765961927s] Feb 23 11:08:56.088: INFO: Created: latency-svc-nnj5d Feb 23 11:08:56.104: INFO: Got endpoints: latency-svc-nnj5d [1.629681378s] Feb 23 11:08:56.151: INFO: Created: latency-svc-jz4fs Feb 23 11:08:57.302: INFO: Got endpoints: latency-svc-jz4fs [2.778480916s] Feb 23 11:08:57.340: INFO: Created: latency-svc-nwdf8 Feb 23 11:08:57.438: INFO: Got endpoints: latency-svc-nwdf8 [2.755374252s] Feb 23 11:08:57.455: INFO: Created: latency-svc-jpgvk Feb 23 11:08:57.479: INFO: Got endpoints: latency-svc-jpgvk [2.748212856s] Feb 23 11:08:57.517: INFO: Created: latency-svc-67fzq Feb 23 11:08:57.637: INFO: Got endpoints: latency-svc-67fzq [2.789911469s] Feb 23 11:08:57.676: INFO: Created: latency-svc-htxzd Feb 23 11:08:57.713: INFO: Got endpoints: latency-svc-htxzd [2.692612685s] Feb 23 11:08:57.940: INFO: Created: latency-svc-q2zkp Feb 23 11:08:58.138: INFO: Got endpoints: latency-svc-q2zkp [3.034743924s] Feb 23 11:08:58.151: INFO: Created: latency-svc-6jhpt Feb 23 11:08:58.164: INFO: Got endpoints: latency-svc-6jhpt [2.916656467s] Feb 23 11:08:59.174: INFO: Created: latency-svc-9nz9h Feb 23 11:08:59.186: INFO: Got endpoints: latency-svc-9nz9h [3.906806567s] Feb 23 11:08:59.543: INFO: Created: latency-svc-9rw4k Feb 23 11:08:59.564: INFO: Got endpoints: latency-svc-9rw4k [4.175022726s] Feb 23 11:08:59.659: INFO: Created: latency-svc-w9m9x Feb 23 11:08:59.666: INFO: Got endpoints: latency-svc-w9m9x [4.219328123s] Feb 23 11:08:59.721: INFO: Created: latency-svc-8nvp9 Feb 23 11:08:59.823: INFO: Got endpoints: latency-svc-8nvp9 [4.259793661s] Feb 23 11:08:59.853: INFO: Created: latency-svc-rbpqp Feb 23 11:08:59.889: INFO: Got endpoints: latency-svc-rbpqp [4.266645779s] Feb 23 11:09:00.061: INFO: Created: latency-svc-2q6tj Feb 23 11:09:00.075: INFO: Got endpoints: latency-svc-2q6tj [4.181587663s] Feb 23 11:09:00.120: INFO: Created: latency-svc-xr7wj Feb 23 11:09:00.149: INFO: Got endpoints: latency-svc-xr7wj [4.080394032s] Feb 23 11:09:00.388: INFO: Created: latency-svc-2hxmc Feb 23 11:09:00.417: INFO: Got endpoints: latency-svc-2hxmc [4.313142937s] Feb 23 11:09:00.535: INFO: Created: latency-svc-c4d6v Feb 23 11:09:00.601: INFO: Got endpoints: latency-svc-c4d6v [3.299098435s] Feb 23 11:09:00.767: INFO: Created: latency-svc-x9j22 Feb 23 11:09:00.768: INFO: Got endpoints: latency-svc-x9j22 [3.329832164s] Feb 23 11:09:00.813: INFO: Created: latency-svc-dm6gs Feb 23 11:09:00.928: INFO: Got endpoints: latency-svc-dm6gs [3.449648771s] Feb 23 11:09:00.937: INFO: Created: latency-svc-gfnpl Feb 23 11:09:00.959: INFO: Got endpoints: latency-svc-gfnpl [3.321167504s] Feb 23 11:09:01.008: INFO: Created: 
latency-svc-kdkzl Feb 23 11:09:01.025: INFO: Got endpoints: latency-svc-kdkzl [3.31180686s] Feb 23 11:09:01.121: INFO: Created: latency-svc-cnlz9 Feb 23 11:09:01.135: INFO: Got endpoints: latency-svc-cnlz9 [2.996963386s] Feb 23 11:09:01.215: INFO: Created: latency-svc-wx7xb Feb 23 11:09:01.313: INFO: Got endpoints: latency-svc-wx7xb [3.148940376s] Feb 23 11:09:01.319: INFO: Created: latency-svc-t7pxf Feb 23 11:09:01.324: INFO: Got endpoints: latency-svc-t7pxf [2.138470736s] Feb 23 11:09:01.503: INFO: Created: latency-svc-g7ndw Feb 23 11:09:01.519: INFO: Got endpoints: latency-svc-g7ndw [1.954664711s] Feb 23 11:09:01.579: INFO: Created: latency-svc-j2289 Feb 23 11:09:01.672: INFO: Got endpoints: latency-svc-j2289 [2.005771151s] Feb 23 11:09:01.727: INFO: Created: latency-svc-dhb2f Feb 23 11:09:01.907: INFO: Created: latency-svc-flrl4 Feb 23 11:09:01.915: INFO: Got endpoints: latency-svc-dhb2f [2.091502575s] Feb 23 11:09:01.918: INFO: Got endpoints: latency-svc-flrl4 [2.028742606s] Feb 23 11:09:01.970: INFO: Created: latency-svc-lhzgf Feb 23 11:09:02.043: INFO: Got endpoints: latency-svc-lhzgf [1.967929353s] Feb 23 11:09:02.077: INFO: Created: latency-svc-r7r5k Feb 23 11:09:02.087: INFO: Got endpoints: latency-svc-r7r5k [1.937465656s] Feb 23 11:09:02.087: INFO: Latencies: [184.010582ms 240.624831ms 283.819605ms 494.581542ms 643.799239ms 676.124288ms 968.260686ms 1.051044186s 1.217570375s 1.432642646s 1.470211576s 1.532346864s 1.629681378s 1.632063514s 1.656316779s 1.658688374s 1.669227034s 1.688203979s 1.697726644s 1.707154289s 1.726166717s 1.730923407s 1.7415954s 1.761920528s 1.765961927s 1.782340262s 1.788934143s 1.799190479s 1.799552914s 1.810956778s 1.814063609s 1.828808391s 1.836415855s 1.836475884s 1.83720691s 1.841720063s 1.843253986s 1.849500439s 1.851431113s 1.859423385s 1.861833107s 1.870682619s 1.875262653s 1.88046605s 1.896126253s 1.901737521s 1.908618284s 1.913407803s 1.914117452s 1.915835313s 1.921063759s 1.922903542s 1.929320991s 1.930941762s 1.937465656s 1.93807943s 1.939408426s 1.944233096s 1.954166048s 1.954664711s 1.958505975s 1.96040793s 1.961755085s 1.96446212s 1.967929353s 1.96908473s 1.969848744s 1.971626081s 1.986547395s 1.990689306s 1.992522021s 1.997880016s 2.002172004s 2.005771151s 2.008056493s 2.01459514s 2.01860248s 2.020400073s 2.024629388s 2.028742606s 2.032499567s 2.039326496s 2.048945119s 2.058595471s 2.066437695s 2.076051514s 2.077874553s 2.089059867s 2.091502575s 2.095025496s 2.095696035s 2.10509704s 2.105992626s 2.108923277s 2.110161122s 2.118233064s 2.11868586s 2.118831638s 2.124781246s 2.126729973s 2.127779634s 2.138470736s 2.151853801s 2.153001079s 2.162884906s 2.171836542s 2.174993465s 2.175670537s 2.177701409s 2.178247477s 2.178807252s 2.181996834s 2.185193171s 2.187451492s 2.191564005s 2.197330421s 2.201861966s 2.202103732s 2.2097927s 2.231946985s 2.239405518s 2.242397525s 2.255235084s 2.262682057s 2.279647192s 2.295869485s 2.306606161s 2.321061268s 2.321900064s 2.323212233s 2.341981809s 2.372396308s 2.372510243s 2.406065995s 2.409165206s 2.416009189s 2.424956127s 2.462428528s 2.497822923s 2.505138122s 2.523887872s 2.530349002s 2.536914761s 2.55211201s 2.55843272s 2.565054306s 2.609140367s 2.60983039s 2.623215292s 2.653187263s 2.658599441s 2.660103782s 2.660125868s 2.662002055s 2.663782888s 2.664816347s 2.671314753s 2.673496783s 2.692612685s 2.709852154s 2.71247383s 2.72187067s 2.722994041s 2.72529884s 2.730596921s 2.740844561s 2.748212856s 2.755374252s 2.76078404s 2.778480916s 2.789911469s 2.797620039s 2.810342753s 2.822089213s 2.887376781s 
2.916656467s 2.917775702s 2.918703299s 2.944285926s 2.963181147s 2.996963386s 2.998231676s 3.016816194s 3.034743924s 3.085815898s 3.148940376s 3.15040519s 3.299098435s 3.31180686s 3.321167504s 3.329832164s 3.449648771s 3.906806567s 4.080394032s 4.175022726s 4.181587663s 4.219328123s 4.259793661s 4.266645779s 4.313142937s] Feb 23 11:09:02.087: INFO: 50 %ile: 2.127779634s Feb 23 11:09:02.087: INFO: 90 %ile: 2.996963386s Feb 23 11:09:02.087: INFO: 99 %ile: 4.266645779s Feb 23 11:09:02.087: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:09:02.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-p84jb" for this suite. Feb 23 11:09:50.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:09:50.290: INFO: namespace: e2e-tests-svc-latency-p84jb, resource: bindings, ignored listing per whitelist Feb 23 11:09:50.448: INFO: namespace e2e-tests-svc-latency-p84jb deletion completed in 48.350555959s • [SLOW TEST:90.869 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:09:50.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Feb 23 11:10:01.056: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:10:25.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-qswz5" for this suite. Feb 23 11:10:31.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:10:31.411: INFO: namespace: e2e-tests-namespaces-qswz5, resource: bindings, ignored listing per whitelist Feb 23 11:10:31.489: INFO: namespace e2e-tests-namespaces-qswz5 deletion completed in 6.168706731s STEP: Destroying namespace "e2e-tests-nsdeletetest-gllkk" for this suite. 
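The Namespaces [Serial] spec above creates a pod in a throwaway namespace, deletes the namespace, waits for the deletion to finish, and then verifies that no pods survive. Below is a minimal client-go sketch of that flow; it is not the conformance test's own code, and the kubeconfig path, namespace name, and pod name are illustrative assumptions (the image reuses the nginx:1.14-alpine tag that appears elsewhere in this log). The calls use the pre-1.17 client-go signatures that match this v1.13 cluster, i.e. without context arguments.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, mirroring the one logged by the suite.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns := "nsdelete-demo" // hypothetical namespace name
	if _, err := client.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}); err != nil {
		panic(err)
	}

	// Create one pod inside the namespace.
	_, err = client.CoreV1().Pods(ns).Create(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"},
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// Delete the namespace and poll until it is gone, which also removes the pod.
	if err := client.CoreV1().Namespaces().Delete(ns, nil); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, getErr := client.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		return apierrors.IsNotFound(getErr), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("namespace", ns, "deleted; no pods remain")
}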
Feb 23 11:10:31.493: INFO: Namespace e2e-tests-nsdeletetest-gllkk was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-94qc4" for this suite. Feb 23 11:10:37.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:10:37.620: INFO: namespace: e2e-tests-nsdeletetest-94qc4, resource: bindings, ignored listing per whitelist Feb 23 11:10:37.670: INFO: namespace e2e-tests-nsdeletetest-94qc4 deletion completed in 6.176891281s • [SLOW TEST:47.220 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:10:37.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 23 11:10:37.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-f9cnx' Feb 23 11:10:38.041: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 23 11:10:38.041: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 23 11:10:40.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-f9cnx' Feb 23 11:10:40.571: INFO: stderr: "" Feb 23 11:10:40.572: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:10:40.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f9cnx" for this suite. 
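As the deprecation warning captured above notes, kubectl run with the deployment generator was being phased out in favour of creating the object directly (kubectl create, or the API). The sketch below builds the same kind of single-replica nginx Deployment through client-go (pre-1.17 signatures); the namespace, labels, and the helper function are assumptions for illustration, while the image matches the command in the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	// "demo" is a hypothetical namespace; the suite used a generated one.
	created, err := client.AppsV1().Deployments("demo").Create(deploy)
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment", created.Name)
}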
Feb 23 11:10:46.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:10:46.914: INFO: namespace: e2e-tests-kubectl-f9cnx, resource: bindings, ignored listing per whitelist Feb 23 11:10:46.949: INFO: namespace e2e-tests-kubectl-f9cnx deletion completed in 6.305481194s • [SLOW TEST:9.279 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:10:46.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 23 11:10:57.826: INFO: Successfully updated pod "labelsupdate24524e6f-562d-11ea-8363-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:10:59.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-njg6t" for this suite. 
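The projected downwardAPI spec above depends on the kubelet rewriting a projected volume file when the pod's labels change. A rough sketch of the kind of pod involved is shown below (client-go pre-1.17 signatures); the pod name, namespace, mount path, and image are assumptions rather than the test's actual values. The pod projects metadata.labels into a file, and updating the pod's labels afterwards causes that file to be refreshed, which is what the test asserts.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labels-demo", // hypothetical name
			Labels: map[string]string{"stage": "before"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/nginx:1.14-alpine", // placeholder image
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	created, err := client.CoreV1().Pods("demo").Create(pod) // "demo" is a hypothetical namespace
	if err != nil {
		panic(err)
	}

	// Changing the labels later triggers the kubelet to rewrite /etc/podinfo/labels.
	created.Labels["stage"] = "after"
	if _, err := client.CoreV1().Pods("demo").Update(created); err != nil {
		panic(err)
	}
}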
Feb 23 11:11:22.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:11:22.087: INFO: namespace: e2e-tests-projected-njg6t, resource: bindings, ignored listing per whitelist Feb 23 11:11:22.154: INFO: namespace e2e-tests-projected-njg6t deletion completed in 22.168970847s • [SLOW TEST:35.204 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:11:22.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:11:22.327: INFO: Creating deployment "nginx-deployment" Feb 23 11:11:22.342: INFO: Waiting for observed generation 1 Feb 23 11:11:24.759: INFO: Waiting for all required pods to come up Feb 23 11:11:25.599: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 23 11:12:04.573: INFO: Waiting for deployment "nginx-deployment" to complete Feb 23 11:12:04.598: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 23 11:12:04.623: INFO: Updating deployment nginx-deployment Feb 23 11:12:04.623: INFO: Waiting for observed generation 2 Feb 23 11:12:08.324: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 23 11:12:09.923: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 23 11:12:09.963: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 23 11:12:10.002: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 23 11:12:10.002: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 23 11:12:10.218: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 23 11:12:10.235: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 23 11:12:10.235: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 23 11:12:10.878: INFO: Updating deployment nginx-deployment Feb 23 11:12:10.878: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 23 11:12:11.002: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 23 11:12:13.932: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
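The [AfterEach] dump that follows shows the deployment with MaxSurge:3 and MaxUnavailable:2 and the two ReplicaSets at 20 and 13 replicas. Those are the proportional-scaling targets the test just verified: during the broken-image rollout the old ReplicaSet sits at 8 and the new one at 5 (13 requested in total), and with maxSurge=3 the cap rises from 13 to 33 once the deployment is scaled to 30, leaving 20 additional replicas to split roughly in proportion to the current ReplicaSet sizes. A small Go sketch of that arithmetic (simplified; the deployment controller's exact rounding and leftover rules are more involved):

package main

import "fmt"

func main() {
	// .spec.replicas of the two ReplicaSets before the scale-up, as logged above.
	oldRS, newRS := int32(8), int32(5)
	current := oldRS + newRS // 13 pods currently requested

	const maxSurge = 3
	target := int32(30 + maxSurge) // cap after scaling the deployment to 30
	extra := target - current      // 20 replicas to hand out

	// Proportional split using integer division; the leftover goes to the
	// other ReplicaSet in this simplified version.
	addOld := extra * oldRS / current // 20*8/13 = 12
	addNew := extra - addOld          // 8

	fmt.Println(oldRS+addOld, newRS+addNew) // 20 13 — the values the test verifies
}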
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 23 11:12:14.508: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-bxslx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bxslx/deployments/nginx-deployment,UID:39501789-562d-11ea-a994-fa163e34d433,ResourceVersion:22636498,Generation:3,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-23 11:12:05 +0000 UTC 2020-02-23 11:11:22 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-23 11:12:12 +0000 UTC 2020-02-23 11:12:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 23 11:12:15.286: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-bxslx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bxslx/replicasets/nginx-deployment-5c98f8fb5,UID:52885a4c-562d-11ea-a994-fa163e34d433,ResourceVersion:22636493,Generation:3,CreationTimestamp:2020-02-23 11:12:04 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 39501789-562d-11ea-a994-fa163e34d433 0xc001001eb7 0xc001001eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 23 11:12:15.286: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 23 11:12:15.286: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-bxslx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bxslx/replicasets/nginx-deployment-85ddf47c5d,UID:3953f34f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636536,Generation:3,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 39501789-562d-11ea-a994-fa163e34d433 0xc001001f77 0xc001001f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 23 11:12:15.925: INFO: Pod "nginx-deployment-5c98f8fb5-225rv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-225rv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-225rv,UID:5848c80f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636549,Generation:0,CreationTimestamp:2020-02-23 11:12:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c2e47 0xc0016c2e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c2fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c2fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.925: INFO: Pod "nginx-deployment-5c98f8fb5-65hql" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-65hql,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-65hql,UID:581b41a3-562d-11ea-a994-fa163e34d433,ResourceVersion:22636541,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3037 0xc0016c3038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c30a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c30c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:14 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.925: INFO: Pod "nginx-deployment-5c98f8fb5-7vjtg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7vjtg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-7vjtg,UID:52a0e64d-562d-11ea-a994-fa163e34d433,ResourceVersion:22636468,Generation:0,CreationTimestamp:2020-02-23 11:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c31f7 0xc0016c31f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3260} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.925: INFO: Pod "nginx-deployment-5c98f8fb5-7vsld" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7vsld,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-7vsld,UID:52ee8678-562d-11ea-a994-fa163e34d433,ResourceVersion:22636484,Generation:0,CreationTimestamp:2020-02-23 11:12:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3487 0xc0016c3488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c34f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.926: INFO: Pod "nginx-deployment-5c98f8fb5-7wpl9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7wpl9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-7wpl9,UID:581be177-562d-11ea-a994-fa163e34d433,ResourceVersion:22636542,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c35d7 0xc0016c35d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.926: INFO: Pod "nginx-deployment-5c98f8fb5-7xh8r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7xh8r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-7xh8r,UID:529bb4fa-562d-11ea-a994-fa163e34d433,ResourceVersion:22636458,Generation:0,CreationTimestamp:2020-02-23 11:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c36d7 0xc0016c36d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3740} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.926: INFO: Pod "nginx-deployment-5c98f8fb5-bnkrw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bnkrw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-bnkrw,UID:584ac942-562d-11ea-a994-fa163e34d433,ResourceVersion:22636550,Generation:0,CreationTimestamp:2020-02-23 11:12:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3827 0xc0016c3828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c38b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.926: INFO: Pod "nginx-deployment-5c98f8fb5-gq592" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gq592,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-gq592,UID:584dc053-562d-11ea-a994-fa163e34d433,ResourceVersion:22636548,Generation:0,CreationTimestamp:2020-02-23 11:12:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3927 0xc0016c3928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3990} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c39b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:14 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.927: INFO: Pod "nginx-deployment-5c98f8fb5-jk4bd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jk4bd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-jk4bd,UID:52a0f477-562d-11ea-a994-fa163e34d433,ResourceVersion:22636479,Generation:0,CreationTimestamp:2020-02-23 11:12:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3a27 0xc0016c3a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.927: INFO: Pod "nginx-deployment-5c98f8fb5-pjbsn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pjbsn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-pjbsn,UID:52d916af-562d-11ea-a994-fa163e34d433,ResourceVersion:22636480,Generation:0,CreationTimestamp:2020-02-23 11:12:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3ba7 0xc0016c3ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:05 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.927: INFO: Pod "nginx-deployment-5c98f8fb5-pvms7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pvms7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-pvms7,UID:5880086c-562d-11ea-a994-fa163e34d433,ResourceVersion:22636553,Generation:0,CreationTimestamp:2020-02-23 11:12:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3d27 0xc0016c3d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.927: INFO: Pod "nginx-deployment-5c98f8fb5-rdwq7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rdwq7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-rdwq7,UID:57d6bf88-562d-11ea-a994-fa163e34d433,ResourceVersion:22636525,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3ea7 0xc0016c3ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016c3f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016c3f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.927: INFO: Pod "nginx-deployment-5c98f8fb5-zfbll" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zfbll,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-5c98f8fb5-zfbll,UID:5849c3e9-562d-11ea-a994-fa163e34d433,ResourceVersion:22636547,Generation:0,CreationTimestamp:2020-02-23 11:12:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52885a4c-562d-11ea-a994-fa163e34d433 0xc0016c3fd7 0xc0016c3fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00060} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:14 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.927: INFO: Pod "nginx-deployment-85ddf47c5d-2xsmc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2xsmc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-2xsmc,UID:3988157d-562d-11ea-a994-fa163e34d433,ResourceVersion:22636395,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00107 0xc001d00108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00170} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-23 11:11:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://962f53bcfad1aa4e25178a1db2972b68c30c6e1c1d3ffe1d50e81a4594d45fe7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.928: INFO: Pod "nginx-deployment-85ddf47c5d-4xpzh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4xpzh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-4xpzh,UID:57d9d25f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636527,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00337 0xc001d00338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d003a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d003c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.928: INFO: Pod "nginx-deployment-85ddf47c5d-89dhz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-89dhz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-89dhz,UID:396fd3e0-562d-11ea-a994-fa163e34d433,ResourceVersion:22636419,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d004a7 0xc001d004a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00570} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-23 11:11:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1b9fc0f737e4c4e8595d8ff2351943be08afce59b281ad83053aaad5918369b2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.928: INFO: Pod "nginx-deployment-85ddf47c5d-8sh2k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8sh2k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-8sh2k,UID:396ef2a0-562d-11ea-a994-fa163e34d433,ResourceVersion:22636399,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00657 0xc001d00658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d006c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d006e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-23 11:11:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bdd1fd97c5d3c65be99e79a2739fbd97d4129a6ee295f844a696034001296d45}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.928: INFO: Pod "nginx-deployment-85ddf47c5d-8vtvf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8vtvf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-8vtvf,UID:572cb783-562d-11ea-a994-fa163e34d433,ResourceVersion:22636508,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d007b7 0xc001d007b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00820} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.928: INFO: Pod "nginx-deployment-85ddf47c5d-985j9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-985j9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-985j9,UID:5766103f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636511,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d008b7 0xc001d008b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00920} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 
11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.929: INFO: Pod "nginx-deployment-85ddf47c5d-b4k7p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b4k7p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-b4k7p,UID:398684f8-562d-11ea-a994-fa163e34d433,ResourceVersion:22636402,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d009b7 0xc001d009b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-23 11:11:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d98233d6dde7a7913964446a2b4b40b6bbfff842d997059149f654d8ca80a539}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.929: INFO: Pod "nginx-deployment-85ddf47c5d-c5ccj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c5ccj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-c5ccj,UID:57da2c8c-562d-11ea-a994-fa163e34d433,ResourceVersion:22636531,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00b07 0xc001d00b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.929: INFO: Pod "nginx-deployment-85ddf47c5d-glphs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-glphs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-glphs,UID:57d97a03-562d-11ea-a994-fa163e34d433,ResourceVersion:22636529,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00c07 0xc001d00c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.929: INFO: Pod "nginx-deployment-85ddf47c5d-hh8b8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hh8b8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-hh8b8,UID:57da09d1-562d-11ea-a994-fa163e34d433,ResourceVersion:22636526,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00d07 0xc001d00d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.929: INFO: Pod "nginx-deployment-85ddf47c5d-hrhvw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hrhvw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-hrhvw,UID:3987f23f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636407,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00e07 0xc001d00e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:23 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-23 11:11:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8320a005beec316bda7b0aaadb93f4071d8995459649b63dc115974bd515850c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.930: INFO: Pod "nginx-deployment-85ddf47c5d-jfrgs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jfrgs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-jfrgs,UID:57da41c3-562d-11ea-a994-fa163e34d433,ResourceVersion:22636528,Generation:0,CreationTimestamp:2020-02-23 11:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d00f57 0xc001d00f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d00fd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d00ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.930: INFO: Pod "nginx-deployment-85ddf47c5d-l4lsw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l4lsw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-l4lsw,UID:39637b9f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636383,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d01067 0xc001d01068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d010d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d010f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-23 11:11:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://58e7b55cc8a6fd0cd331c80825714ff3574513dff4446d6968ed831ca68b6f9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.930: INFO: Pod "nginx-deployment-85ddf47c5d-lx86g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lx86g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-lx86g,UID:396ecbf9-562d-11ea-a994-fa163e34d433,ResourceVersion:22636411,Generation:0,CreationTimestamp:2020-02-23 11:11:22 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d011b7 0xc001d011b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d01220} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d01240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-23 11:11:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6f690cfb138bf07908cf453fe208da7d10f299f69b7da1c2fcc7bd529fd24e76}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.930: INFO: Pod "nginx-deployment-85ddf47c5d-n6b89" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n6b89,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-n6b89,UID:57675c5b-562d-11ea-a994-fa163e34d433,ResourceVersion:22636510,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d01307 0xc001d01308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d01370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d01390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.930: INFO: Pod "nginx-deployment-85ddf47c5d-rwvv2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rwvv2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-rwvv2,UID:57298f64-562d-11ea-a994-fa163e34d433,ResourceVersion:22636533,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d01407 0xc001d01408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d01470} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d01490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.930: INFO: Pod "nginx-deployment-85ddf47c5d-sc8jr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sc8jr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-sc8jr,UID:572cdc89-562d-11ea-a994-fa163e34d433,ResourceVersion:22636556,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d01547 0xc001d01548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d015b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d015d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-23 11:12:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.931: INFO: Pod "nginx-deployment-85ddf47c5d-tfg8x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tfg8x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-tfg8x,UID:396e6d91-562d-11ea-a994-fa163e34d433,ResourceVersion:22636414,Generation:0,CreationTimestamp:2020-02-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d01687 0xc001d01688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d016f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d01710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-23 11:11:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-23 11:11:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5fecaed2cc70252b80bb9856598bd85573cf148e1f70f84575e4c4fec33bcd98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.931: INFO: Pod "nginx-deployment-85ddf47c5d-zntjr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zntjr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-zntjr,UID:57673213-562d-11ea-a994-fa163e34d433,ResourceVersion:22636516,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d017d7 0xc001d017d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d01840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d01860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 23 11:12:15.931: INFO: Pod "nginx-deployment-85ddf47c5d-zvcdj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zvcdj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bxslx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bxslx/pods/nginx-deployment-85ddf47c5d-zvcdj,UID:5766d49f-562d-11ea-a994-fa163e34d433,ResourceVersion:22636518,Generation:0,CreationTimestamp:2020-02-23 11:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3953f34f-562d-11ea-a994-fa163e34d433 0xc001d018d7 0xc001d018d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rvp2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rvp2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rvp2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d01940} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d01960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 
11:12:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:12:15.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-bxslx" for this suite. Feb 23 11:13:31.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:13:31.471: INFO: namespace: e2e-tests-deployment-bxslx, resource: bindings, ignored listing per whitelist Feb 23 11:13:31.499: INFO: namespace e2e-tests-deployment-bxslx deletion completed in 1m14.146985151s • [SLOW TEST:129.345 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:13:31.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 23 11:13:32.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-zctzs' Feb 23 11:13:35.997: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 23 11:13:35.997: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 23 11:13:38.344: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-gbkrt] Feb 23 11:13:38.345: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-gbkrt" in namespace "e2e-tests-kubectl-zctzs" to be "running and ready" Feb 23 11:13:38.391: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 46.203877ms Feb 23 11:13:40.403: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05850985s Feb 23 11:13:43.090: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.744905326s Feb 23 11:13:45.118: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.773234717s Feb 23 11:13:47.133: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.788545138s Feb 23 11:13:49.145: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.799931595s Feb 23 11:13:51.870: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.525538468s Feb 23 11:13:54.332: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.987126869s Feb 23 11:13:57.919: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 19.574286842s Feb 23 11:13:59.928: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 21.583049334s Feb 23 11:14:01.971: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 23.626393722s Feb 23 11:14:03.992: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 25.647476362s Feb 23 11:14:06.017: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 27.672012719s Feb 23 11:14:08.043: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Pending", Reason="", readiness=false. Elapsed: 29.698593743s Feb 23 11:14:10.059: INFO: Pod "e2e-test-nginx-rc-gbkrt": Phase="Running", Reason="", readiness=true. Elapsed: 31.714285027s Feb 23 11:14:10.059: INFO: Pod "e2e-test-nginx-rc-gbkrt" satisfied condition "running and ready" Feb 23 11:14:10.059: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-gbkrt] Feb 23 11:14:10.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zctzs' Feb 23 11:14:10.246: INFO: stderr: "" Feb 23 11:14:10.246: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Feb 23 11:14:10.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zctzs' Feb 23 11:14:10.427: INFO: stderr: "" Feb 23 11:14:10.427: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:14:10.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zctzs" for this suite. 
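A condensed, hedged sketch of the three kubectl invocations this spec exercises, not part of the recorded run: the --kubeconfig flag is dropped, and the generated namespace and RC names from this run are kept purely for illustration. Per the stderr captured above, --generator=run/v1 is deprecated in favor of --generator=run-pod/v1 or kubectl create.
# create the replication controller from the image (deprecated generator, as the test does)
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=e2e-tests-kubectl-zctzs
# fetch logs through the RC, then clean up, mirroring the test's teardown
kubectl logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zctzs
kubectl delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zctzs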
Feb 23 11:14:34.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:14:34.645: INFO: namespace: e2e-tests-kubectl-zctzs, resource: bindings, ignored listing per whitelist Feb 23 11:14:34.696: INFO: namespace e2e-tests-kubectl-zctzs deletion completed in 24.262570721s • [SLOW TEST:63.197 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:14:34.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 23 11:14:34.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h4qxp' Feb 23 11:14:35.912: INFO: stderr: "" Feb 23 11:14:35.912: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Feb 23 11:14:37.311: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:37.311: INFO: Found 0 / 1 Feb 23 11:14:37.937: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:37.937: INFO: Found 0 / 1 Feb 23 11:14:38.928: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:38.929: INFO: Found 0 / 1 Feb 23 11:14:39.936: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:39.936: INFO: Found 0 / 1 Feb 23 11:14:40.945: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:40.945: INFO: Found 0 / 1 Feb 23 11:14:41.941: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:41.941: INFO: Found 0 / 1 Feb 23 11:14:43.025: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:43.025: INFO: Found 0 / 1 Feb 23 11:14:43.985: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:43.986: INFO: Found 0 / 1 Feb 23 11:14:44.959: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:44.959: INFO: Found 0 / 1 Feb 23 11:14:45.941: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:45.941: INFO: Found 1 / 1 Feb 23 11:14:45.941: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 23 11:14:45.951: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:14:45.951: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Feb 23 11:14:45.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp' Feb 23 11:14:46.155: INFO: stderr: "" Feb 23 11:14:46.155: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Feb 11:14:44.927 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Feb 11:14:44.927 # Server started, Redis version 3.2.12\n1:M 23 Feb 11:14:44.927 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Feb 11:14:44.927 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 23 11:14:46.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --tail=1' Feb 23 11:14:46.282: INFO: stderr: "" Feb 23 11:14:46.282: INFO: stdout: "1:M 23 Feb 11:14:44.927 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 23 11:14:46.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --limit-bytes=1' Feb 23 11:14:46.430: INFO: stderr: "" Feb 23 11:14:46.430: INFO: stdout: " " STEP: exposing timestamps Feb 23 11:14:46.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --tail=1 --timestamps' Feb 23 11:14:46.601: INFO: stderr: "" Feb 23 11:14:46.601: INFO: stdout: "2020-02-23T11:14:44.929799037Z 1:M 23 Feb 11:14:44.927 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 23 11:14:49.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --since=1s' Feb 23 11:14:49.289: INFO: stderr: "" Feb 23 11:14:49.289: INFO: stdout: "" Feb 23 11:14:49.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --since=24h' Feb 23 11:14:49.485: INFO: stderr: "" Feb 23 11:14:49.485: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Feb 11:14:44.927 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Feb 11:14:44.927 # Server started, Redis version 3.2.12\n1:M 23 Feb 11:14:44.927 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Feb 11:14:44.927 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 23 11:14:49.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-h4qxp' Feb 23 11:14:49.655: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:14:49.655: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 23 11:14:49.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-h4qxp' Feb 23 11:14:49.804: INFO: stderr: "No resources found.\n" Feb 23 11:14:49.805: INFO: stdout: "" Feb 23 11:14:49.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-h4qxp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 23 11:14:49.972: INFO: stderr: "" Feb 23 11:14:49.972: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:14:49.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h4qxp" for this suite. 
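A hedged recap of the log-filtering flags this spec just exercised, written with the kubectl logs spelling rather than the log form the harness invokes; the pod name, container name, and namespace are the ones generated by this run and are illustrative only.
# last line only
kubectl logs redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --tail=1
# first byte only
kubectl logs redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --limit-bytes=1
# prepend a timestamp to each line
kubectl logs redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --tail=1 --timestamps
# restrict output to a time window
kubectl logs redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --since=1s
kubectl logs redis-master-z68d4 redis-master --namespace=e2e-tests-kubectl-h4qxp --since=24h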
Feb 23 11:14:56.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:14:56.207: INFO: namespace: e2e-tests-kubectl-h4qxp, resource: bindings, ignored listing per whitelist Feb 23 11:14:56.234: INFO: namespace e2e-tests-kubectl-h4qxp deletion completed in 6.241842877s • [SLOW TEST:21.538 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:14:56.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-l4kdm STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l4kdm to expose endpoints map[] Feb 23 11:14:56.598: INFO: Get endpoints failed (16.820011ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 23 11:14:57.613: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l4kdm exposes endpoints map[] (1.031186268s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-l4kdm STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l4kdm to expose endpoints map[pod1:[80]] Feb 23 11:15:01.767: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.135996274s elapsed, will retry) Feb 23 11:15:07.518: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l4kdm exposes endpoints map[pod1:[80]] (9.886884368s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-l4kdm STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l4kdm to expose endpoints map[pod2:[80] pod1:[80]] Feb 23 11:15:11.846: INFO: Unexpected endpoints: found map[b9a36c77-562d-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.313229546s elapsed, will retry) Feb 23 11:15:17.150: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l4kdm exposes endpoints map[pod1:[80] pod2:[80]] (9.617129146s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-l4kdm STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l4kdm to expose endpoints map[pod2:[80]] Feb 23 11:15:18.741: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l4kdm exposes endpoints map[pod2:[80]] (1.579438975s elapsed) STEP: Deleting pod pod2 
in namespace e2e-tests-services-l4kdm STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-l4kdm to expose endpoints map[] Feb 23 11:15:20.174: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-l4kdm exposes endpoints map[] (1.418099614s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:15:20.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-l4kdm" for this suite. Feb 23 11:15:50.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:15:50.357: INFO: namespace: e2e-tests-services-l4kdm, resource: bindings, ignored listing per whitelist Feb 23 11:15:50.567: INFO: namespace e2e-tests-services-l4kdm deletion completed in 30.327434878s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:54.331 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:15:50.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-d94a9247-562d-11ea-8363-0242ac110008 STEP: Creating a pod to test consume secrets Feb 23 11:15:50.808: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-clkqk" to be "success or failure" Feb 23 11:15:51.004: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 196.040128ms Feb 23 11:15:53.019: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211337523s Feb 23 11:15:55.027: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218862301s Feb 23 11:15:57.126: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317794449s Feb 23 11:15:59.393: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585612707s Feb 23 11:16:01.409: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.601696402s Feb 23 11:16:03.426: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.618363294s STEP: Saw pod success Feb 23 11:16:03.426: INFO: Pod "pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:16:03.432: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 23 11:16:03.633: INFO: Waiting for pod pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008 to disappear Feb 23 11:16:03.645: INFO: Pod pod-projected-secrets-d953434d-562d-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:16:03.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-clkqk" for this suite. Feb 23 11:16:11.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:16:11.806: INFO: namespace: e2e-tests-projected-clkqk, resource: bindings, ignored listing per whitelist Feb 23 11:16:11.883: INFO: namespace e2e-tests-projected-clkqk deletion completed in 8.22636578s • [SLOW TEST:21.316 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:16:11.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e60a1c84-562d-11ea-8363-0242ac110008 STEP: Creating a pod to test consume secrets Feb 23 11:16:12.173: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-fgzvf" to be "success or failure" Feb 23 11:16:12.189: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.012499ms Feb 23 11:16:14.205: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031600216s Feb 23 11:16:16.236: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.063229238s Feb 23 11:16:18.457: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283774205s Feb 23 11:16:20.489: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.316203974s Feb 23 11:16:22.528: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.355115271s STEP: Saw pod success Feb 23 11:16:22.529: INFO: Pod "pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:16:22.542: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 23 11:16:23.492: INFO: Waiting for pod pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008 to disappear Feb 23 11:16:23.793: INFO: Pod pod-projected-secrets-e60b7e7e-562d-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:16:23.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fgzvf" for this suite. Feb 23 11:16:29.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:16:29.940: INFO: namespace: e2e-tests-projected-fgzvf, resource: bindings, ignored listing per whitelist Feb 23 11:16:30.084: INFO: namespace e2e-tests-projected-fgzvf deletion completed in 6.275445955s • [SLOW TEST:18.201 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:16:30.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f0dddb4e-562d-11ea-8363-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 23 11:16:30.313: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-7mbl7" to be "success or failure" Feb 23 11:16:30.323: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.664684ms Feb 23 11:16:32.452: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138592421s Feb 23 11:16:34.464: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150643703s Feb 23 11:16:36.807: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492865244s Feb 23 11:16:38.935: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.620991637s Feb 23 11:16:41.121: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.807121914s STEP: Saw pod success Feb 23 11:16:41.121: INFO: Pod "pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:16:41.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 23 11:16:41.295: INFO: Waiting for pod pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008 to disappear Feb 23 11:16:41.311: INFO: Pod pod-configmaps-f0df3f0a-562d-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:16:41.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7mbl7" for this suite. Feb 23 11:16:47.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:16:47.478: INFO: namespace: e2e-tests-configmap-7mbl7, resource: bindings, ignored listing per whitelist Feb 23 11:16:47.602: INFO: namespace e2e-tests-configmap-7mbl7 deletion completed in 6.279227119s • [SLOW TEST:17.517 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:16:47.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fb4d33d5-562d-11ea-8363-0242ac110008 STEP: Creating a pod to test consume secrets Feb 23 11:16:47.957: INFO: Waiting up to 5m0s for pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-7jcpk" to be "success or failure" Feb 23 11:16:47.964: INFO: Pod 
"pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690741ms Feb 23 11:16:50.083: INFO: Pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12496975s Feb 23 11:16:52.101: INFO: Pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143897578s Feb 23 11:16:54.586: INFO: Pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62892344s Feb 23 11:16:56.620: INFO: Pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662459289s Feb 23 11:16:58.706: INFO: Pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.748073133s STEP: Saw pod success Feb 23 11:16:58.706: INFO: Pod "pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:16:58.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 23 11:16:59.538: INFO: Waiting for pod pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008 to disappear Feb 23 11:16:59.574: INFO: Pod pod-secrets-fb4edcb3-562d-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:16:59.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7jcpk" for this suite. Feb 23 11:17:05.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:17:05.750: INFO: namespace: e2e-tests-secrets-7jcpk, resource: bindings, ignored listing per whitelist Feb 23 11:17:05.818: INFO: namespace e2e-tests-secrets-7jcpk deletion completed in 6.23310467s • [SLOW TEST:18.215 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:17:05.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search 
dns-test-service.e2e-tests-dns-78k78 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-78k78;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-78k78;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-78k78.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.189.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.189.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.189.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.189.212_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-78k78;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-78k78;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-78k78.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-78k78.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-78k78.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.189.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.189.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.189.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.189.212_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 23 11:17:21.222: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.228: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.232: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-78k78 from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.241: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-78k78 from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.254: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.263: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.271: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.276: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.281: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.285: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.294: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.302: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.309: INFO: Unable to read 10.98.189.212_udp@PTR from 
pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.313: INFO: Unable to read 10.98.189.212_tcp@PTR from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.320: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.338: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.345: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-78k78 from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.350: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-78k78 from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.354: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.366: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.382: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.400: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.409: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.413: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.417: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.424: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: 
the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.428: INFO: Unable to read 10.98.189.212_udp@PTR from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.432: INFO: Unable to read 10.98.189.212_tcp@PTR from pod e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008: the server could not find the requested resource (get pods dns-test-06949673-562e-11ea-8363-0242ac110008) Feb 23 11:17:21.432: INFO: Lookups using e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-78k78 wheezy_tcp@dns-test-service.e2e-tests-dns-78k78 wheezy_udp@dns-test-service.e2e-tests-dns-78k78.svc wheezy_tcp@dns-test-service.e2e-tests-dns-78k78.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.98.189.212_udp@PTR 10.98.189.212_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-78k78 jessie_tcp@dns-test-service.e2e-tests-dns-78k78 jessie_udp@dns-test-service.e2e-tests-dns-78k78.svc jessie_tcp@dns-test-service.e2e-tests-dns-78k78.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-78k78.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-78k78.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.98.189.212_udp@PTR 10.98.189.212_tcp@PTR] Feb 23 11:17:27.001: INFO: DNS probes using e2e-tests-dns-78k78/dns-test-06949673-562e-11ea-8363-0242ac110008 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:17:27.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-78k78" for this suite. 
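The wheezy/jessie probe loops quoted above reduce to a handful of dig lookups: the headless service's A records (short and fully qualified), the SRV record for its named "http" port, the pod's own A record, and a reverse PTR lookup of the ClusterIP (10.98.189.212 in this run). The early "Unable to read ..." lines are just the framework polling for the probe pod's result files before they exist; the later "DNS probes ... succeeded" line is the pass. A minimal manual spot-check of the same records, assuming a throwaway pod with dig installed (called "dnsutils" here purely for illustration, it is not a pod from this run) in the e2e-tests-dns-78k78 namespace:

# A record of the headless service, short name and namespace-qualified
kubectl exec -n e2e-tests-dns-78k78 dnsutils -- dig +search +short dns-test-service A
kubectl exec -n e2e-tests-dns-78k78 dnsutils -- dig +search +short dns-test-service.e2e-tests-dns-78k78.svc A
# SRV record published for the service's named port "http"
kubectl exec -n e2e-tests-dns-78k78 dnsutils -- dig +search +short _http._tcp.dns-test-service.e2e-tests-dns-78k78.svc SRV
# reverse (PTR) lookup of the ClusterIP seen in the log
kubectl exec -n e2e-tests-dns-78k78 dnsutils -- dig +short -x 10.98.189.212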
Feb 23 11:17:33.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:17:33.835: INFO: namespace: e2e-tests-dns-78k78, resource: bindings, ignored listing per whitelist Feb 23 11:17:33.924: INFO: namespace e2e-tests-dns-78k78 deletion completed in 6.325512857s • [SLOW TEST:28.106 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:17:33.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 11:17:34.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-8x6v8" to be "success or failure" Feb 23 11:17:34.327: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.628328ms Feb 23 11:17:36.358: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043307201s Feb 23 11:17:38.374: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058711488s Feb 23 11:17:40.406: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091055602s Feb 23 11:17:42.425: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110009518s Feb 23 11:17:44.444: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.128807686s STEP: Saw pod success Feb 23 11:17:44.444: INFO: Pod "downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:17:44.457: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 11:17:44.743: INFO: Waiting for pod downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008 to disappear Feb 23 11:17:44.751: INFO: Pod downwardapi-volume-16f92b79-562e-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:17:44.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8x6v8" for this suite. Feb 23 11:17:50.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:17:50.977: INFO: namespace: e2e-tests-downward-api-8x6v8, resource: bindings, ignored listing per whitelist Feb 23 11:17:50.995: INFO: namespace e2e-tests-downward-api-8x6v8 deletion completed in 6.237147515s • [SLOW TEST:17.070 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:17:50.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 11:17:51.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-qgblp" to be "success or failure" Feb 23 11:17:51.190: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090585ms Feb 23 11:17:53.204: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023655756s Feb 23 11:17:55.222: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041438006s Feb 23 11:17:57.289: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.108769714s Feb 23 11:17:59.301: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120887178s Feb 23 11:18:01.319: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139114752s STEP: Saw pod success Feb 23 11:18:01.319: INFO: Pod "downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:18:01.326: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 11:18:01.714: INFO: Waiting for pod downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008 to disappear Feb 23 11:18:02.235: INFO: Pod downwardapi-volume-2114c865-562e-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:18:02.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qgblp" for this suite. Feb 23 11:18:08.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:18:08.494: INFO: namespace: e2e-tests-downward-api-qgblp, resource: bindings, ignored listing per whitelist Feb 23 11:18:08.594: INFO: namespace e2e-tests-downward-api-qgblp deletion completed in 6.332339851s • [SLOW TEST:17.599 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:18:08.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:18:08.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 23 11:18:09.184: INFO: stderr: "" Feb 23 11:18:09.185: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:18:09.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mwmd5" for this suite. Feb 23 11:18:15.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:18:15.397: INFO: namespace: e2e-tests-kubectl-mwmd5, resource: bindings, ignored listing per whitelist Feb 23 11:18:15.432: INFO: namespace e2e-tests-kubectl-mwmd5 deletion completed in 6.219218538s • [SLOW TEST:6.837 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:18:15.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-2fa83111-562e-11ea-8363-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-2fa83174-562e-11ea-8363-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2fa83111-562e-11ea-8363-0242ac110008 STEP: Updating configmap cm-test-opt-upd-2fa83174-562e-11ea-8363-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-2fa831d9-562e-11ea-8363-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:20:00.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2tdc4" for this suite. 
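The optional-update test above hinges on the "optional" flag of a projected configMap volume source: a ConfigMap that does not exist yet does not block pod startup, and creating or updating it later is reflected in the mounted files on the kubelet's next sync. A minimal sketch of that shape, with illustrative names (not the generated cm-test-opt-* names from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config      # may not exist yet
          optional: true         # missing ConfigMap does not block startup
EOF
# create the ConfigMap afterwards; its keys show up under /etc/projected
# once the kubelet resyncs the volume (typically within a minute)
kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl exec projected-cm-demo -- ls /etc/projected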
Feb 23 11:20:24.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:20:24.343: INFO: namespace: e2e-tests-projected-2tdc4, resource: bindings, ignored listing per whitelist Feb 23 11:20:24.389: INFO: namespace e2e-tests-projected-2tdc4 deletion completed in 24.229524629s • [SLOW TEST:128.956 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:20:24.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-264mz Feb 23 11:20:34.708: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-264mz STEP: checking the pod's current state and verifying that restartCount is present Feb 23 11:20:34.714: INFO: Initial restart count of pod liveness-http is 0 Feb 23 11:21:01.070: INFO: Restart count of pod e2e-tests-container-probe-264mz/liveness-http is now 1 (26.35688564s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:21:01.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-264mz" for this suite. 
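The restart observed above (restartCount of liveness-http going from 0 to 1 after roughly 26 seconds) is the expected effect of an httpGet liveness probe on /healthz once the endpoint starts failing. A minimal sketch of such a probe; the image and thresholds here are illustrative rather than the test's exact values, and any server whose /healthz eventually returns errors would behave the same way:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # illustrative; serves /healthz OK briefly, then 500s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
# watch the kubelet restart the container once the probe starts failing
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'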
Feb 23 11:21:07.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:21:07.370: INFO: namespace: e2e-tests-container-probe-264mz, resource: bindings, ignored listing per whitelist Feb 23 11:21:07.417: INFO: namespace e2e-tests-container-probe-264mz deletion completed in 6.26230009s • [SLOW TEST:43.027 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:21:07.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0223 11:21:49.903204 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 23 11:21:49.903: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:21:49.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-kw8g7" for this suite. 
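The 30-second wait above verifies that pods created by the ReplicationController survive when the RC itself is deleted with an orphaning delete option, i.e. propagationPolicy "Orphan" in the DeleteOptions. A hedged sketch of reproducing that by hand, with illustrative names and the default namespace; on a client of this vintage (v1.13) the closest kubectl flag is --cascade=false, while newer clients spell it --cascade=orphan:

kubectl delete rc demo-rc --cascade=false     # newer kubectl: --cascade=orphan
# or state the propagation policy explicitly via the API:
kubectl proxy &
curl -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc
# the pods the RC created should still be Running, now without an owner
kubectl get pods -l app=demo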
Feb 23 11:22:06.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:22:06.354: INFO: namespace: e2e-tests-gc-kw8g7, resource: bindings, ignored listing per whitelist Feb 23 11:22:07.017: INFO: namespace e2e-tests-gc-kw8g7 deletion completed in 17.105307399s • [SLOW TEST:59.600 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:22:07.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 23 11:22:22.026: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ba0f6319-562e-11ea-8363-0242ac110008,GenerateName:,Namespace:e2e-tests-events-rjwzw,SelfLink:/api/v1/namespaces/e2e-tests-events-rjwzw/pods/send-events-ba0f6319-562e-11ea-8363-0242ac110008,UID:ba149f71-562e-11ea-a994-fa163e34d433,ResourceVersion:22638011,Generation:0,CreationTimestamp:2020-02-23 11:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 827650670,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lb2wd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb2wd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-lb2wd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00240cdf0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00240ce10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:22:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:22:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-23 11:22:08 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-23 11:22:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://d750eda794d8eecb4f2617b7a41ca6fae2418969d4cbc212a1bc241dc6616999}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 23 11:22:24.037: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 23 11:22:26.057: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:22:26.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-rjwzw" for this suite. Feb 23 11:23:16.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:23:16.397: INFO: namespace: e2e-tests-events-rjwzw, resource: bindings, ignored listing per whitelist Feb 23 11:23:16.417: INFO: namespace e2e-tests-events-rjwzw deletion completed in 50.288495441s • [SLOW TEST:69.400 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:23:16.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the 
expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:24:20.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-4kl6w" for this suite. Feb 23 11:24:26.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:24:26.799: INFO: namespace: e2e-tests-container-runtime-4kl6w, resource: bindings, ignored listing per whitelist Feb 23 11:24:26.812: INFO: namespace e2e-tests-container-runtime-4kl6w deletion completed in 6.251316515s • [SLOW TEST:70.394 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:24:26.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 23 11:24:27.115: INFO: Waiting up to 5m0s for pod "pod-0d1278f4-562f-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-87gmb" to be "success or failure" Feb 23 11:24:27.140: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.811055ms Feb 23 11:24:29.453: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338116877s Feb 23 11:24:31.477: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362284697s Feb 23 11:24:33.524: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.408482719s Feb 23 11:24:35.566: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450951254s Feb 23 11:24:37.584: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468698324s STEP: Saw pod success Feb 23 11:24:37.584: INFO: Pod "pod-0d1278f4-562f-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:24:37.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0d1278f4-562f-11ea-8363-0242ac110008 container test-container: STEP: delete the pod Feb 23 11:24:38.059: INFO: Waiting for pod pod-0d1278f4-562f-11ea-8363-0242ac110008 to disappear Feb 23 11:24:38.356: INFO: Pod pod-0d1278f4-562f-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:24:38.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-87gmb" for this suite. Feb 23 11:24:46.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:24:46.653: INFO: namespace: e2e-tests-emptydir-87gmb, resource: bindings, ignored listing per whitelist Feb 23 11:24:46.758: INFO: namespace e2e-tests-emptydir-87gmb deletion completed in 8.380518836s • [SLOW TEST:19.946 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:24:46.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sq74p STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 23 11:24:47.021: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 23 11:25:23.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-sq74p PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 11:25:23.307: INFO: >>> kubeConfig: /root/.kube/config I0223 11:25:23.399716 8 log.go:172] (0xc0000fcd10) (0xc001cae460) Create stream I0223 11:25:23.399803 8 log.go:172] (0xc0000fcd10) (0xc001cae460) Stream added, broadcasting: 1 I0223 11:25:23.405946 8 log.go:172] (0xc0000fcd10) Reply frame received for 1 I0223 11:25:23.405996 8 log.go:172] 
(0xc0000fcd10) (0xc001a62000) Create stream I0223 11:25:23.406016 8 log.go:172] (0xc0000fcd10) (0xc001a62000) Stream added, broadcasting: 3 I0223 11:25:23.407587 8 log.go:172] (0xc0000fcd10) Reply frame received for 3 I0223 11:25:23.407619 8 log.go:172] (0xc0000fcd10) (0xc001cae500) Create stream I0223 11:25:23.407631 8 log.go:172] (0xc0000fcd10) (0xc001cae500) Stream added, broadcasting: 5 I0223 11:25:23.409035 8 log.go:172] (0xc0000fcd10) Reply frame received for 5 I0223 11:25:23.678876 8 log.go:172] (0xc0000fcd10) Data frame received for 3 I0223 11:25:23.679002 8 log.go:172] (0xc001a62000) (3) Data frame handling I0223 11:25:23.679031 8 log.go:172] (0xc001a62000) (3) Data frame sent I0223 11:25:23.841690 8 log.go:172] (0xc0000fcd10) (0xc001a62000) Stream removed, broadcasting: 3 I0223 11:25:23.841901 8 log.go:172] (0xc0000fcd10) Data frame received for 1 I0223 11:25:23.841925 8 log.go:172] (0xc001cae460) (1) Data frame handling I0223 11:25:23.841962 8 log.go:172] (0xc001cae460) (1) Data frame sent I0223 11:25:23.841984 8 log.go:172] (0xc0000fcd10) (0xc001cae460) Stream removed, broadcasting: 1 I0223 11:25:23.842062 8 log.go:172] (0xc0000fcd10) (0xc001cae500) Stream removed, broadcasting: 5 I0223 11:25:23.842231 8 log.go:172] (0xc0000fcd10) Go away received I0223 11:25:23.842337 8 log.go:172] (0xc0000fcd10) (0xc001cae460) Stream removed, broadcasting: 1 I0223 11:25:23.842357 8 log.go:172] (0xc0000fcd10) (0xc001a62000) Stream removed, broadcasting: 3 I0223 11:25:23.842376 8 log.go:172] (0xc0000fcd10) (0xc001cae500) Stream removed, broadcasting: 5 Feb 23 11:25:23.842: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:25:23.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-sq74p" for this suite. 
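The streamed exec frames above are the framework running curl from the host-network test pod against the netserver pod's /hostName endpoint (10.32.0.4:8080 in this run), then matching the returned hostname against the expected endpoint list. The same check can be issued by hand with the pod and container names from the log; the target IP and port are whatever the netserver pod is actually serving on:

kubectl exec -n e2e-tests-pod-network-test-sq74p host-test-container-pod -c hostexec -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName
# expected output: the hostname of the serving pod (netserver-0 above)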
Feb 23 11:25:47.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:25:48.061: INFO: namespace: e2e-tests-pod-network-test-sq74p, resource: bindings, ignored listing per whitelist Feb 23 11:25:48.142: INFO: namespace e2e-tests-pod-network-test-sq74p deletion completed in 24.264539697s • [SLOW TEST:61.384 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:25:48.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Feb 23 11:25:48.601: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-knt4p" to be "success or failure" Feb 23 11:25:48.612: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.559843ms Feb 23 11:25:50.755: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153802953s Feb 23 11:25:52.780: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178524141s Feb 23 11:25:55.244: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642137454s Feb 23 11:25:57.267: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.665077179s Feb 23 11:25:59.277: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.675705935s Feb 23 11:26:01.287: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.685703169s Feb 23 11:26:03.308: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.706659125s STEP: Saw pod success Feb 23 11:26:03.308: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 23 11:26:03.316: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 23 11:26:04.249: INFO: Waiting for pod pod-host-path-test to disappear Feb 23 11:26:04.507: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:26:04.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-knt4p" for this suite. Feb 23 11:26:10.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:26:10.863: INFO: namespace: e2e-tests-hostpath-knt4p, resource: bindings, ignored listing per whitelist Feb 23 11:26:10.943: INFO: namespace e2e-tests-hostpath-knt4p deletion completed in 6.3413088s • [SLOW TEST:22.800 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:26:10.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 23 11:26:11.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz659' Feb 23 11:26:13.527: INFO: stderr: "" Feb 23 11:26:13.527: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 23 11:26:14.546: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:14.547: INFO: Found 0 / 1 Feb 23 11:26:15.570: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:15.571: INFO: Found 0 / 1 Feb 23 11:26:16.609: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:16.609: INFO: Found 0 / 1 Feb 23 11:26:17.540: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:17.540: INFO: Found 0 / 1 Feb 23 11:26:18.566: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:18.566: INFO: Found 0 / 1 Feb 23 11:26:19.543: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:19.543: INFO: Found 0 / 1 Feb 23 11:26:20.605: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:20.606: INFO: Found 0 / 1 Feb 23 11:26:21.539: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:21.539: INFO: Found 0 / 1 Feb 23 11:26:22.556: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:22.556: INFO: Found 0 / 1 Feb 23 11:26:23.549: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:23.549: INFO: Found 1 / 1 Feb 23 11:26:23.549: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 23 11:26:23.558: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:23.559: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 23 11:26:23.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7xr7b --namespace=e2e-tests-kubectl-nz659 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 23 11:26:23.826: INFO: stderr: "" Feb 23 11:26:23.826: INFO: stdout: "pod/redis-master-7xr7b patched\n" STEP: checking annotations Feb 23 11:26:23.842: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:26:23.842: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:26:23.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nz659" for this suite. 
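The patch step above is a plain strategic-merge patch that adds an annotation to each pod matched by the RC's selector, and the "checking annotations" step simply reads it back. Reproduced by hand with the pod name and namespace from this run:

kubectl patch pod redis-master-7xr7b -n e2e-tests-kubectl-nz659 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# verify the annotation landed
kubectl get pod redis-master-7xr7b -n e2e-tests-kubectl-nz659 \
  -o jsonpath='{.metadata.annotations.x}'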
Feb 23 11:26:47.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:26:48.014: INFO: namespace: e2e-tests-kubectl-nz659, resource: bindings, ignored listing per whitelist Feb 23 11:26:48.142: INFO: namespace e2e-tests-kubectl-nz659 deletion completed in 24.242220933s • [SLOW TEST:37.199 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:26:48.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 23 11:26:48.370: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-xspzh" to be "success or failure" Feb 23 11:26:48.411: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 40.566553ms Feb 23 11:26:50.903: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533034512s Feb 23 11:26:52.958: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587534332s Feb 23 11:26:54.978: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607464736s Feb 23 11:26:57.344: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.973526553s Feb 23 11:26:59.360: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.990098197s STEP: Saw pod success Feb 23 11:26:59.360: INFO: Pod "downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:26:59.374: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008 container client-container: STEP: delete the pod Feb 23 11:26:59.572: INFO: Waiting for pod downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008 to disappear Feb 23 11:26:59.614: INFO: Pod downwardapi-volume-61407efa-562f-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:26:59.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xspzh" for this suite. Feb 23 11:27:05.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:27:05.801: INFO: namespace: e2e-tests-projected-xspzh, resource: bindings, ignored listing per whitelist Feb 23 11:27:05.866: INFO: namespace e2e-tests-projected-xspzh deletion completed in 6.246858076s • [SLOW TEST:17.723 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:27:05.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:27:06.121: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 23 11:27:11.174: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 23 11:27:13.191: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 23 11:27:15.205: INFO: Creating deployment "test-rollover-deployment" Feb 23 11:27:15.240: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 23 11:27:17.260: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 23 11:27:17.466: INFO: Ensure that both replica sets have 1 created replica Feb 23 11:27:17.479: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 23 11:27:17.495: INFO: Updating deployment test-rollover-deployment Feb 23 11:27:17.495: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 23 11:27:19.557: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 23 11:27:19.572: INFO: Make sure deployment "test-rollover-deployment" is 
complete Feb 23 11:27:19.582: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:19.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:21.657: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:21.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:23.684: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:23.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:25.617: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:25.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:27.605: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:27.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:29.605: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:29.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:31.610: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:31.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054050, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:33.623: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:33.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054050, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:35.606: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:35.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054050, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:37.618: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:37.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054050, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:39.605: INFO: all replica sets need to contain the pod-template-hash label Feb 23 11:27:39.605: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054050, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054035, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:27:41.600: INFO: Feb 23 11:27:41.600: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 23 11:27:41.610: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-g2hgk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2hgk/deployments/test-rollover-deployment,UID:7146a30a-562f-11ea-a994-fa163e34d433,ResourceVersion:22638705,Generation:2,CreationTimestamp:2020-02-23 11:27:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-23 11:27:15 +0000 UTC 2020-02-23 11:27:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-23 11:27:41 +0000 UTC 2020-02-23 11:27:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 23 11:27:41.615: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-g2hgk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2hgk/replicasets/test-rollover-deployment-5b8479fdb6,UID:72a4df18-562f-11ea-a994-fa163e34d433,ResourceVersion:22638695,Generation:2,CreationTimestamp:2020-02-23 11:27:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7146a30a-562f-11ea-a994-fa163e34d433 0xc0020de1a7 0xc0020de1a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 23 11:27:41.615: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 23 11:27:41.615: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-g2hgk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2hgk/replicasets/test-rollover-controller,UID:6bd1f5d6-562f-11ea-a994-fa163e34d433,ResourceVersion:22638704,Generation:2,CreationTimestamp:2020-02-23 11:27:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7146a30a-562f-11ea-a994-fa163e34d433 0xc001771f37 0xc001771f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 23 11:27:41.615: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-g2hgk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2hgk/replicasets/test-rollover-deployment-58494b7559,UID:71503969-562f-11ea-a994-fa163e34d433,ResourceVersion:22638660,Generation:2,CreationTimestamp:2020-02-23 11:27:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7146a30a-562f-11ea-a994-fa163e34d433 0xc0020de0d7 0xc0020de0d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 23 11:27:41.622: INFO: Pod "test-rollover-deployment-5b8479fdb6-5q7f5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-5q7f5,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-g2hgk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g2hgk/pods/test-rollover-deployment-5b8479fdb6-5q7f5,UID:730a2c0a-562f-11ea-a994-fa163e34d433,ResourceVersion:22638680,Generation:0,CreationTimestamp:2020-02-23 11:27:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 72a4df18-562f-11ea-a994-fa163e34d433 0xc001d944b7 
0xc001d944b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pzqfx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pzqfx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pzqfx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d94860} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d94880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:27:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:27:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:27:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:27:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-23 11:27:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-23 11:27:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0210bf68d15bc6bfcce13e48721fb0e5fe7ca92af675754e2373053f89ff6687}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:27:41.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-g2hgk" for this suite. 
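The rollover behaviour above follows directly from the Deployment's rolling-update settings dumped in the log (MaxUnavailable:0, MaxSurge:1, MinReadySeconds:10). A minimal manifest sketch reconstructed from those dumped fields, not taken from the test source, looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never go below the desired replica count
      maxSurge: 1         # allow one extra pod while the new ReplicaSet comes up
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

With maxUnavailable: 0 and maxSurge: 1 the new ReplicaSet has to come up alongside the old one, and minReadySeconds: 10 keeps the new pod counted as unavailable until it has been Ready for 10 seconds, which is why the polling loop above reports ReadyReplicas:2 with AvailableReplicas:1 for several iterations before the old ReplicaSets are finally scaled to zero.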
Feb 23 11:27:50.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:27:50.384: INFO: namespace: e2e-tests-deployment-g2hgk, resource: bindings, ignored listing per whitelist Feb 23 11:27:50.505: INFO: namespace e2e-tests-deployment-g2hgk deletion completed in 8.878197597s • [SLOW TEST:44.639 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:27:50.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Feb 23 11:27:50.816: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 23 11:27:50.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:27:51.225: INFO: stderr: "" Feb 23 11:27:51.225: INFO: stdout: "service/redis-slave created\n" Feb 23 11:27:51.226: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 23 11:27:51.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:27:51.661: INFO: stderr: "" Feb 23 11:27:51.661: INFO: stdout: "service/redis-master created\n" Feb 23 11:27:51.662: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 23 11:27:51.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:27:52.113: INFO: stderr: "" Feb 23 11:27:52.113: INFO: stdout: "service/frontend created\n" Feb 23 11:27:52.114: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 23 11:27:52.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:27:52.435: INFO: stderr: "" Feb 23 11:27:52.436: INFO: stdout: "deployment.extensions/frontend created\n" Feb 23 11:27:52.436: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 23 11:27:52.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:27:52.967: INFO: stderr: "" Feb 23 11:27:52.967: INFO: stdout: "deployment.extensions/redis-master created\n" Feb 23 11:27:52.968: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Feb 23 11:27:52.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:27:53.484: INFO: stderr: "" Feb 23 11:27:53.484: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Feb 23 11:27:53.484: INFO: Waiting for all frontend pods to be Running. Feb 23 11:28:18.536: INFO: Waiting for frontend to serve content. Feb 23 11:28:18.599: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Feb 23 11:28:23.720: INFO: Trying to add a new entry to the guestbook. Feb 23 11:28:23.757: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 23 11:28:23.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:28:24.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:28:24.103: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 23 11:28:24.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:28:24.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:28:24.607: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 23 11:28:24.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:28:24.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:28:24.945: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 23 11:28:24.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:28:25.098: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:28:25.098: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 23 11:28:25.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:28:25.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:28:25.288: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 23 11:28:25.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bppct' Feb 23 11:28:25.551: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 11:28:25.552: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:28:25.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bppct" for this suite. 
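The guestbook manifests are echoed inline by the test before each kubectl create; reflowed for readability, the frontend Service that the test later polls for content is:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Cleanup then feeds the same manifests back to kubectl delete --grace-period=0 --force, which is why each deletion above is preceded by the warning that immediate deletion does not wait for confirmation.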
Feb 23 11:29:13.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:29:13.723: INFO: namespace: e2e-tests-kubectl-bppct, resource: bindings, ignored listing per whitelist Feb 23 11:29:13.832: INFO: namespace e2e-tests-kubectl-bppct deletion completed in 48.272895133s • [SLOW TEST:83.326 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:29:13.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 23 11:29:14.147: INFO: Waiting up to 5m0s for pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-c6wrm" to be "success or failure" Feb 23 11:29:14.224: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 76.587903ms Feb 23 11:29:16.250: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102399051s Feb 23 11:29:18.264: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116487419s Feb 23 11:29:20.330: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182536423s Feb 23 11:29:22.354: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207019713s Feb 23 11:29:24.368: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.220355286s STEP: Saw pod success Feb 23 11:29:24.368: INFO: Pod "downward-api-b82466b8-562f-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:29:24.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b82466b8-562f-11ea-8363-0242ac110008 container dapi-container: STEP: delete the pod Feb 23 11:29:24.474: INFO: Waiting for pod downward-api-b82466b8-562f-11ea-8363-0242ac110008 to disappear Feb 23 11:29:24.489: INFO: Pod downward-api-b82466b8-562f-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:29:24.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c6wrm" for this suite. 
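The pod spec itself is not echoed in the log; a minimal sketch of the kind of pod this test creates, assuming the usual downward-API env var form (the busybox image, the env command and the HOST_IP variable name are illustrative; only the dapi-container name appears in the log), is:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-host-ip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # print the environment so the test can inspect the container log
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # injects the node's IP as an env var

The test then reads the container log and checks that the injected variable carries a valid node address.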
Feb 23 11:29:30.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:29:30.618: INFO: namespace: e2e-tests-downward-api-c6wrm, resource: bindings, ignored listing per whitelist Feb 23 11:29:30.724: INFO: namespace e2e-tests-downward-api-c6wrm deletion completed in 6.22614692s • [SLOW TEST:16.892 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:29:30.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 23 11:29:30.902: INFO: namespace e2e-tests-kubectl-fb4gf Feb 23 11:29:30.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fb4gf' Feb 23 11:29:31.251: INFO: stderr: "" Feb 23 11:29:31.251: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 23 11:29:32.267: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:32.267: INFO: Found 0 / 1 Feb 23 11:29:33.849: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:33.849: INFO: Found 0 / 1 Feb 23 11:29:34.270: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:34.270: INFO: Found 0 / 1 Feb 23 11:29:35.285: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:35.285: INFO: Found 0 / 1 Feb 23 11:29:37.171: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:37.172: INFO: Found 0 / 1 Feb 23 11:29:37.688: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:37.688: INFO: Found 0 / 1 Feb 23 11:29:38.264: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:38.264: INFO: Found 0 / 1 Feb 23 11:29:39.259: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:39.259: INFO: Found 0 / 1 Feb 23 11:29:40.266: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:40.266: INFO: Found 0 / 1 Feb 23 11:29:41.264: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:41.264: INFO: Found 1 / 1 Feb 23 11:29:41.264: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 23 11:29:41.279: INFO: Selector matched 1 pods for map[app:redis] Feb 23 11:29:41.279: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 23 11:29:41.279: INFO: wait on redis-master startup in e2e-tests-kubectl-fb4gf Feb 23 11:29:41.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pbf69 redis-master --namespace=e2e-tests-kubectl-fb4gf' Feb 23 11:29:41.471: INFO: stderr: "" Feb 23 11:29:41.471: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Feb 11:29:39.597 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Feb 11:29:39.598 # Server started, Redis version 3.2.12\n1:M 23 Feb 11:29:39.598 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Feb 11:29:39.598 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 23 11:29:41.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-fb4gf' Feb 23 11:29:41.675: INFO: stderr: "" Feb 23 11:29:41.675: INFO: stdout: "service/rm2 exposed\n" Feb 23 11:29:41.713: INFO: Service rm2 in namespace e2e-tests-kubectl-fb4gf found. STEP: exposing service Feb 23 11:29:43.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-fb4gf' Feb 23 11:29:44.050: INFO: stderr: "" Feb 23 11:29:44.050: INFO: stdout: "service/rm3 exposed\n" Feb 23 11:29:44.140: INFO: Service rm3 in namespace e2e-tests-kubectl-fb4gf found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:29:46.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fb4gf" for this suite. 
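Neither generated Service is dumped, but its shape follows from the expose flags; a sketch of what kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 produces, assuming the selector is copied from the replication controller (the log shows its pods matching app: redis), is:

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis          # copied from the RC's selector
  ports:
  - port: 1234          # service port, from --port
    targetPort: 6379    # container port, from --target-port

The second expose then wraps this service again as rm3 on port 2345, still targeting 6379 in the redis pod.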
Feb 23 11:30:12.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:30:12.406: INFO: namespace: e2e-tests-kubectl-fb4gf, resource: bindings, ignored listing per whitelist Feb 23 11:30:12.439: INFO: namespace e2e-tests-kubectl-fb4gf deletion completed in 26.262937878s • [SLOW TEST:41.715 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:30:12.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 23 11:30:12.715: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:30:32.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-gz5fc" for this suite. 
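The log only notes "PodSpec: initContainers in spec.initContainers"; a minimal sketch of a pod showing the behaviour being verified (a failing init container on a restartPolicy: Never pod keeps the app container from ever starting and moves the pod to Failed), with the names, image and commands as illustrative assumptions, is:

apiVersion: v1
kind: Pod
metadata:
  name: init-fails-once
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox
    command: ["/bin/false"]   # exits non-zero, so initialization never completes
  containers:
  - name: app
    image: busybox
    command: ["/bin/true"]    # never runs; the pod ends up Failed instead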
Feb 23 11:30:38.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:30:38.750: INFO: namespace: e2e-tests-init-container-gz5fc, resource: bindings, ignored listing per whitelist Feb 23 11:30:38.778: INFO: namespace e2e-tests-init-container-gz5fc deletion completed in 6.371928863s • [SLOW TEST:26.337 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:30:38.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:30:38.997: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:30:40.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-h5q22" for this suite. 
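The definition is created through the client rather than from a manifest, so nothing is printed; a sketch of a minimal CustomResourceDefinition of the kind this test creates and deletes, using the apiextensions.k8s.io/v1beta1 API served by a v1.13 cluster and an illustrative group and kind, is:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com      # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo

Creating this object registers a new foos resource under example.com/v1, and deleting it removes the resource again, which is all the conformance check requires.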
Feb 23 11:30:46.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:30:46.691: INFO: namespace: e2e-tests-custom-resource-definition-h5q22, resource: bindings, ignored listing per whitelist Feb 23 11:30:46.729: INFO: namespace e2e-tests-custom-resource-definition-h5q22 deletion completed in 6.229781926s • [SLOW TEST:7.950 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:30:46.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 23 11:30:46.930: INFO: Waiting up to 5m0s for pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008" in namespace "e2e-tests-var-expansion-t62ql" to be "success or failure" Feb 23 11:30:46.948: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.692918ms Feb 23 11:30:49.165: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235734411s Feb 23 11:30:51.177: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247085241s Feb 23 11:30:53.321: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391786997s Feb 23 11:30:55.332: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402176305s Feb 23 11:30:57.346: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.416446504s STEP: Saw pod success Feb 23 11:30:57.346: INFO: Pod "var-expansion-ef76b74e-562f-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:30:57.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ef76b74e-562f-11ea-8363-0242ac110008 container dapi-container: STEP: delete the pod Feb 23 11:30:58.299: INFO: Waiting for pod var-expansion-ef76b74e-562f-11ea-8363-0242ac110008 to disappear Feb 23 11:30:58.323: INFO: Pod var-expansion-ef76b74e-562f-11ea-8363-0242ac110008 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:30:58.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-t62ql" for this suite. Feb 23 11:31:04.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:31:04.633: INFO: namespace: e2e-tests-var-expansion-t62ql, resource: bindings, ignored listing per whitelist Feb 23 11:31:04.661: INFO: namespace e2e-tests-var-expansion-t62ql deletion completed in 6.329892327s • [SLOW TEST:17.932 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:31:04.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:31:17.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-l9jh8" for this suite. 
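The always-failing pod is not dumped; a sketch of the sort of busybox pod this test schedules before reading the container's terminated state, with the pod name, image and command as assumptions, is:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # exits 1 immediately, every time

Once the container has run, status.containerStatuses[0].state.terminated is populated with a non-empty reason (typically Error), which is the field the assertion inspects.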
Feb 23 11:31:23.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:31:23.512: INFO: namespace: e2e-tests-kubelet-test-l9jh8, resource: bindings, ignored listing per whitelist Feb 23 11:31:23.726: INFO: namespace e2e-tests-kubelet-test-l9jh8 deletion completed in 6.479092987s • [SLOW TEST:19.065 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:31:23.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0223 11:31:34.083556 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 23 11:31:34.083: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:31:34.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-7bmhr" for this suite. 
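No manifest is shown for the replication controller; what the collection step depends on is that every pod the RC creates carries a controller ownerReference back to it, so deleting the RC without orphaning lets the garbage collector remove the pods. A sketch of that metadata on a created pod, with the names and UID purely illustrative, is:

apiVersion: v1
kind: Pod
metadata:
  generateName: example-rc-
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: example-rc                               # set automatically by the RC controller
    uid: 00000000-0000-0000-0000-000000000000      # illustrative
    controller: true
    blockOwnerDeletion: true

Deleting the owner with a non-orphaning propagation policy (Background or Foreground) is what makes the "wait for all pods to be garbage collected" step above succeed; with Orphan the pods would be left behind.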
Feb 23 11:31:40.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:31:40.229: INFO: namespace: e2e-tests-gc-7bmhr, resource: bindings, ignored listing per whitelist Feb 23 11:31:40.306: INFO: namespace e2e-tests-gc-7bmhr deletion completed in 6.21343534s • [SLOW TEST:16.580 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:31:40.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zbjtz STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 23 11:31:40.740: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 23 11:32:17.209: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-zbjtz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 11:32:17.209: INFO: >>> kubeConfig: /root/.kube/config I0223 11:32:17.357736 8 log.go:172] (0xc000c56420) (0xc00203a460) Create stream I0223 11:32:17.358065 8 log.go:172] (0xc000c56420) (0xc00203a460) Stream added, broadcasting: 1 I0223 11:32:17.424686 8 log.go:172] (0xc000c56420) Reply frame received for 1 I0223 11:32:17.424914 8 log.go:172] (0xc000c56420) (0xc001a06dc0) Create stream I0223 11:32:17.424943 8 log.go:172] (0xc000c56420) (0xc001a06dc0) Stream added, broadcasting: 3 I0223 11:32:17.428168 8 log.go:172] (0xc000c56420) Reply frame received for 3 I0223 11:32:17.428246 8 log.go:172] (0xc000c56420) (0xc00203a500) Create stream I0223 11:32:17.428259 8 log.go:172] (0xc000c56420) (0xc00203a500) Stream added, broadcasting: 5 I0223 11:32:17.432283 8 log.go:172] (0xc000c56420) Reply frame received for 5 I0223 11:32:17.677590 8 log.go:172] (0xc000c56420) Data frame received for 3 I0223 11:32:17.677652 8 log.go:172] (0xc001a06dc0) (3) Data frame handling I0223 11:32:17.677673 8 log.go:172] (0xc001a06dc0) (3) Data frame sent I0223 11:32:17.832626 8 log.go:172] (0xc000c56420) Data frame received for 1 I0223 11:32:17.832693 8 log.go:172] (0xc00203a460) (1) Data frame handling I0223 11:32:17.832734 8 log.go:172] (0xc00203a460) (1) Data frame sent I0223 11:32:17.832773 8 log.go:172] (0xc000c56420) (0xc00203a460) Stream removed, broadcasting: 1 I0223 11:32:17.833652 8 log.go:172] (0xc000c56420) (0xc001a06dc0) Stream 
removed, broadcasting: 3 I0223 11:32:17.833737 8 log.go:172] (0xc000c56420) (0xc00203a500) Stream removed, broadcasting: 5 I0223 11:32:17.833860 8 log.go:172] (0xc000c56420) (0xc00203a460) Stream removed, broadcasting: 1 I0223 11:32:17.833885 8 log.go:172] (0xc000c56420) (0xc001a06dc0) Stream removed, broadcasting: 3 I0223 11:32:17.833894 8 log.go:172] (0xc000c56420) (0xc00203a500) Stream removed, broadcasting: 5 Feb 23 11:32:17.834: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:32:17.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0223 11:32:17.834845 8 log.go:172] (0xc000c56420) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-zbjtz" for this suite. Feb 23 11:32:41.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:32:42.031: INFO: namespace: e2e-tests-pod-network-test-zbjtz, resource: bindings, ignored listing per whitelist Feb 23 11:32:42.083: INFO: namespace e2e-tests-pod-network-test-zbjtz deletion completed in 24.206984893s • [SLOW TEST:61.776 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:32:42.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-344ad208-5630-11ea-8363-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 23 11:32:42.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-4q5zj" to be "success or failure" Feb 23 11:32:42.457: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.074382ms Feb 23 11:32:44.526: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104784418s Feb 23 11:32:46.563: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142087671s Feb 23 11:32:48.993: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.571849601s Feb 23 11:32:51.063: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.642246914s Feb 23 11:32:53.085: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664092453s Feb 23 11:32:55.097: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.675984869s STEP: Saw pod success Feb 23 11:32:55.097: INFO: Pod "pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:32:55.102: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 23 11:32:55.241: INFO: Waiting for pod pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008 to disappear Feb 23 11:32:55.261: INFO: Pod pod-configmaps-344c2f7b-5630-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:32:55.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4q5zj" for this suite. Feb 23 11:33:02.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:33:02.160: INFO: namespace: e2e-tests-configmap-4q5zj, resource: bindings, ignored listing per whitelist Feb 23 11:33:02.175: INFO: namespace e2e-tests-configmap-4q5zj deletion completed in 6.896876977s • [SLOW TEST:20.092 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:33:02.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 23 11:33:02.407: INFO: Waiting up to 5m0s for pod "pod-4036881a-5630-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-zk2sd" to be "success or failure" Feb 23 11:33:02.572: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 164.635546ms Feb 23 11:33:04.642: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234021934s Feb 23 11:33:06.698: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290234012s Feb 23 11:33:08.713: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305704552s Feb 23 11:33:10.771: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.363942439s Feb 23 11:33:12.782: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37404786s STEP: Saw pod success Feb 23 11:33:12.782: INFO: Pod "pod-4036881a-5630-11ea-8363-0242ac110008" satisfied condition "success or failure" Feb 23 11:33:12.787: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4036881a-5630-11ea-8363-0242ac110008 container test-container: STEP: delete the pod Feb 23 11:33:13.500: INFO: Waiting for pod pod-4036881a-5630-11ea-8363-0242ac110008 to disappear Feb 23 11:33:13.889: INFO: Pod pod-4036881a-5630-11ea-8363-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:33:13.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zk2sd" for this suite. Feb 23 11:33:19.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:33:19.990: INFO: namespace: e2e-tests-emptydir-zk2sd, resource: bindings, ignored listing per whitelist Feb 23 11:33:20.105: INFO: namespace e2e-tests-emptydir-zk2sd deletion completed in 6.199025421s • [SLOW TEST:17.930 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:33:20.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:33:20.327: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 23 11:33:20.384: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 23 11:33:26.618: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 23 11:33:29.206: INFO: Creating deployment "test-rolling-update-deployment" Feb 23 11:33:29.399: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 23 11:33:29.424: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 23 11:33:33.803: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 23 11:33:33.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:33:35.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:33:37.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:33:39.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718054409, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 11:33:41.984: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 23 11:33:42.019: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-brs2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brs2k/deployments/test-rolling-update-deployment,UID:50337d86-5630-11ea-a994-fa163e34d433,ResourceVersion:22639676,Generation:1,CreationTimestamp:2020-02-23 11:33:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-23 11:33:29 +0000 UTC 2020-02-23 11:33:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-23 11:33:40 +0000 UTC 2020-02-23 11:33:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 23 11:33:42.029: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment 
"test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-brs2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brs2k/replicasets/test-rolling-update-deployment-75db98fb4c,UID:505a0c17-5630-11ea-a994-fa163e34d433,ResourceVersion:22639665,Generation:1,CreationTimestamp:2020-02-23 11:33:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 50337d86-5630-11ea-a994-fa163e34d433 0xc001cc69e7 0xc001cc69e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 23 11:33:42.029: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 23 11:33:42.030: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-brs2k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brs2k/replicasets/test-rolling-update-controller,UID:4ae7fd65-5630-11ea-a994-fa163e34d433,ResourceVersion:22639675,Generation:2,CreationTimestamp:2020-02-23 11:33:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 50337d86-5630-11ea-a994-fa163e34d433 0xc001cc6927 0xc001cc6928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 23 11:33:42.173: INFO: Pod "test-rolling-update-deployment-75db98fb4c-764js" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-764js,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-brs2k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-brs2k/pods/test-rolling-update-deployment-75db98fb4c-764js,UID:50699a36-5630-11ea-a994-fa163e34d433,ResourceVersion:22639664,Generation:0,CreationTimestamp:2020-02-23 11:33:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 505a0c17-5630-11ea-a994-fa163e34d433 0xc001cc72c7 0xc001cc72c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngtwx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngtwx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ngtwx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cc7330} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cc7350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:33:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:33:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:33:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:33:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-23 11:33:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-23 11:33:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a1c7ef49726a3596337a96d82324daa703adb2ef99c4d39e3fc7de43edbeb0b4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 23 11:33:42.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-brs2k" for this suite. Feb 23 11:33:50.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 23 11:33:50.546: INFO: namespace: e2e-tests-deployment-brs2k, resource: bindings, ignored listing per whitelist Feb 23 11:33:50.614: INFO: namespace e2e-tests-deployment-brs2k deletion completed in 8.381852074s • [SLOW TEST:30.509 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 23 11:33:50.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 23 11:33:51.818: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 64.464543ms)
Feb 23 11:33:51.965: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 147.831523ms)
Feb 23 11:33:51.978: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.519819ms)
Feb 23 11:33:51.990: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.344574ms)
Feb 23 11:33:52.001: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.118155ms)
Feb 23 11:33:52.016: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.917505ms)
Feb 23 11:33:52.033: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.388178ms)
Feb 23 11:33:52.049: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.014221ms)
Feb 23 11:33:52.059: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.172502ms)
Feb 23 11:33:52.068: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.129822ms)
Feb 23 11:33:52.074: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.359187ms)
Feb 23 11:33:52.084: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.671143ms)
Feb 23 11:33:52.094: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.455168ms)
Feb 23 11:33:52.099: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.010958ms)
Feb 23 11:33:52.104: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.104396ms)
Feb 23 11:33:52.109: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.415687ms)
Feb 23 11:33:52.113: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.25883ms)
Feb 23 11:33:52.118: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.332697ms)
Feb 23 11:33:52.124: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.328817ms)
Feb 23 11:33:52.133: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.553675ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:33:52.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-xzxcb" for this suite.
Feb 23 11:33:58.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:33:58.337: INFO: namespace: e2e-tests-proxy-xzxcb, resource: bindings, ignored listing per whitelist
Feb 23 11:33:58.413: INFO: namespace e2e-tests-proxy-xzxcb deletion completed in 6.273358344s

• [SLOW TEST:7.799 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
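For reference, the proxied requests in this test go through the API server's node proxy subresource rather than to the kubelet directly. A minimal way to reproduce one of them by hand, using the node name and kubeconfig path shown in the log (the command itself is illustrative and not part of the suite):

  # Fetch the kubelet's log directory listing via the API server's node proxy:
  kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"

The twenty numbered responses above, (0) through (19), are iterations of essentially this request, each returning 200 with the latency recorded at the end of the line.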
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:33:58.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 23 11:33:58.696: INFO: Waiting up to 5m0s for pod "pod-61c359d0-5630-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-cfsht" to be "success or failure"
Feb 23 11:33:58.706: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.782848ms
Feb 23 11:34:00.717: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020704054s
Feb 23 11:34:02.751: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054909215s
Feb 23 11:34:04.853: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156610307s
Feb 23 11:34:07.263: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566876872s
Feb 23 11:34:09.279: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582455187s
STEP: Saw pod success
Feb 23 11:34:09.279: INFO: Pod "pod-61c359d0-5630-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:34:09.291: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-61c359d0-5630-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 11:34:09.589: INFO: Waiting for pod pod-61c359d0-5630-11ea-8363-0242ac110008 to disappear
Feb 23 11:34:09.601: INFO: Pod pod-61c359d0-5630-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:34:09.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cfsht" for this suite.
Feb 23 11:34:15.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:34:15.771: INFO: namespace: e2e-tests-emptydir-cfsht, resource: bindings, ignored listing per whitelist
Feb 23 11:34:15.818: INFO: namespace e2e-tests-emptydir-cfsht deletion completed in 6.20513724s

• [SLOW TEST:17.404 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
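The pod created in this test mounts an emptyDir volume with no medium set and checks the mount's permissions from inside the container. A rough sketch of that shape — the pod name, busybox image, and stat command are illustrative stand-ins, not the suite's actual test manifest:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-default-medium-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                      # stand-in for the e2e test image
      command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mount's mode
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                        # no medium set => backed by node storage

As in the run above, such a pod sits in Pending while the image is pulled and the container starts, then reports Succeeded once the command exits 0; the test then reads the container's log (the "Trying to get logs" step) to verify the reported mode.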
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:34:15.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 11:34:16.167: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.876901ms)
Feb 23 11:34:16.174: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.801547ms)
Feb 23 11:34:16.180: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.570495ms)
Feb 23 11:34:16.186: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.357278ms)
Feb 23 11:34:16.193: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.929016ms)
Feb 23 11:34:16.247: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 53.586583ms)
Feb 23 11:34:16.265: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.575165ms)
Feb 23 11:34:16.295: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.036997ms)
Feb 23 11:34:16.303: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.56314ms)
Feb 23 11:34:16.313: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.637904ms)
Feb 23 11:34:16.325: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.317231ms)
Feb 23 11:34:16.360: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 35.457859ms)
Feb 23 11:34:16.374: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.321905ms)
Feb 23 11:34:16.385: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.271869ms)
Feb 23 11:34:16.396: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.084764ms)
Feb 23 11:34:16.406: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.697513ms)
Feb 23 11:34:16.411: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.159196ms)
Feb 23 11:34:16.416: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.231805ms)
Feb 23 11:34:16.423: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.744536ms)
Feb 23 11:34:16.429: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.483459ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:34:16.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-fkdhb" for this suite.
Feb 23 11:34:22.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:34:22.555: INFO: namespace: e2e-tests-proxy-fkdhb, resource: bindings, ignored listing per whitelist
Feb 23 11:34:22.960: INFO: namespace e2e-tests-proxy-fkdhb deletion completed in 6.525863764s

• [SLOW TEST:7.140 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
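The only difference from the earlier proxy case is the node resource name in the request path: here it carries an explicit port (":10250", as shown above), so the kubelet port is named in the URL instead of being left to the API server's default for the node. An equivalent one-off request, again illustrative rather than part of the suite:

  kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"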
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:34:22.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 23 11:34:23.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:23.485: INFO: stderr: ""
Feb 23 11:34:23.486: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 23 11:34:23.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:23.705: INFO: stderr: ""
Feb 23 11:34:23.705: INFO: stdout: "update-demo-nautilus-6zzjm update-demo-nautilus-d85vh "
Feb 23 11:34:23.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6zzjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:23.911: INFO: stderr: ""
Feb 23 11:34:23.911: INFO: stdout: ""
Feb 23 11:34:23.911: INFO: update-demo-nautilus-6zzjm is created but not running
Feb 23 11:34:28.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:29.345: INFO: stderr: ""
Feb 23 11:34:29.345: INFO: stdout: "update-demo-nautilus-6zzjm update-demo-nautilus-d85vh "
Feb 23 11:34:29.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6zzjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:29.549: INFO: stderr: ""
Feb 23 11:34:29.550: INFO: stdout: ""
Feb 23 11:34:29.550: INFO: update-demo-nautilus-6zzjm is created but not running
Feb 23 11:34:34.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:34.749: INFO: stderr: ""
Feb 23 11:34:34.749: INFO: stdout: "update-demo-nautilus-6zzjm update-demo-nautilus-d85vh "
Feb 23 11:34:34.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6zzjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:34.905: INFO: stderr: ""
Feb 23 11:34:34.906: INFO: stdout: ""
Feb 23 11:34:34.906: INFO: update-demo-nautilus-6zzjm is created but not running
Feb 23 11:34:39.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:40.114: INFO: stderr: ""
Feb 23 11:34:40.114: INFO: stdout: "update-demo-nautilus-6zzjm update-demo-nautilus-d85vh "
Feb 23 11:34:40.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6zzjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:40.254: INFO: stderr: ""
Feb 23 11:34:40.254: INFO: stdout: "true"
Feb 23 11:34:40.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6zzjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:40.498: INFO: stderr: ""
Feb 23 11:34:40.498: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 11:34:40.498: INFO: validating pod update-demo-nautilus-6zzjm
Feb 23 11:34:40.563: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 11:34:40.563: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 11:34:40.563: INFO: update-demo-nautilus-6zzjm is verified up and running
Feb 23 11:34:40.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d85vh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:40.718: INFO: stderr: ""
Feb 23 11:34:40.718: INFO: stdout: "true"
Feb 23 11:34:40.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d85vh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:40.828: INFO: stderr: ""
Feb 23 11:34:40.828: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 11:34:40.829: INFO: validating pod update-demo-nautilus-d85vh
Feb 23 11:34:40.843: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 11:34:40.843: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 11:34:40.843: INFO: update-demo-nautilus-d85vh is verified up and running
STEP: using delete to clean up resources
Feb 23 11:34:40.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:40.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 23 11:34:40.958: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 23 11:34:40.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6dm5r'
Feb 23 11:34:41.239: INFO: stderr: "No resources found.\n"
Feb 23 11:34:41.240: INFO: stdout: ""
Feb 23 11:34:41.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6dm5r -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 23 11:34:41.415: INFO: stderr: ""
Feb 23 11:34:41.415: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:34:41.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6dm5r" for this suite.
Feb 23 11:35:05.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:35:05.726: INFO: namespace: e2e-tests-kubectl-6dm5r, resource: bindings, ignored listing per whitelist
Feb 23 11:35:05.753: INFO: namespace e2e-tests-kubectl-6dm5r deletion completed in 24.324263138s

• [SLOW TEST:42.793 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
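The replication controller piped into `kubectl create -f -` at the start of this test can be approximated by a manifest like the one below. The object name, the name=update-demo label, the container name, and the nautilus image all come from the log output above; the rest is a guess at a minimal equivalent, not the suite's exact update-demo fixture:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: update-demo-nautilus
  spec:
    replicas: 2                  # the run above waits on two nautilus pods
    selector:
      name: update-demo
    template:
      metadata:
        labels:
          name: update-demo
      spec:
        containers:
        - name: update-demo
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0

The readiness polling above is plain `kubectl get pods -o template`, with a go-template that prints "true" only when a container named update-demo reports a running state, and cleanup is the force delete shown in the log (`kubectl delete --grace-period=0 --force -f -`) followed by a check that no matching rc, svc, or pods remain.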
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:35:05.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 23 11:35:06.009: INFO: Waiting up to 5m0s for pod "pod-89e1db4f-5630-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-n6s7z" to be "success or failure"
Feb 23 11:35:06.138: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 128.208859ms
Feb 23 11:35:08.955: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.94543426s
Feb 23 11:35:11.612: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.602436672s
Feb 23 11:35:13.641: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.631572755s
Feb 23 11:35:15.908: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.898187651s
Feb 23 11:35:17.955: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.945274054s
Feb 23 11:35:19.969: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.959141456s
STEP: Saw pod success
Feb 23 11:35:19.969: INFO: Pod "pod-89e1db4f-5630-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:35:19.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-89e1db4f-5630-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 11:35:21.182: INFO: Waiting for pod pod-89e1db4f-5630-11ea-8363-0242ac110008 to disappear
Feb 23 11:35:21.200: INFO: Pod pod-89e1db4f-5630-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:35:21.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n6s7z" for this suite.
Feb 23 11:35:29.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:35:29.363: INFO: namespace: e2e-tests-emptydir-n6s7z, resource: bindings, ignored listing per whitelist
Feb 23 11:35:29.516: INFO: namespace e2e-tests-emptydir-n6s7z deletion completed in 8.307747571s

• [SLOW TEST:23.762 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
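This is the same emptyDir pattern as the default-medium case earlier, but here the container, running as root, creates a file and the test checks that it carries 0644 permissions. A rough equivalent, again with illustrative names and a busybox stand-in for the test image; swapping `emptyDir: {}` for `emptyDir: {medium: Memory}` gives the tmpfs-backed variant exercised earlier in this run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo             # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                      # stand-in for the e2e test image
      command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                        # default medium; medium: Memory would request tmpfs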
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:35:29.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 23 11:35:29.638: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 23 11:35:29.732: INFO: Waiting for terminating namespaces to be deleted...
Feb 23 11:35:29.740: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 23 11:35:29.759: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 23 11:35:29.759: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 23 11:35:29.759: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 11:35:29.759: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 23 11:35:29.759: INFO: 	Container weave ready: true, restart count 0
Feb 23 11:35:29.759: INFO: 	Container weave-npc ready: true, restart count 0
Feb 23 11:35:29.759: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 23 11:35:29.759: INFO: 	Container coredns ready: true, restart count 0
Feb 23 11:35:29.759: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 11:35:29.759: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 11:35:29.759: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 11:35:29.759: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 23 11:35:29.759: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9e264511-5630-11ea-8363-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9e264511-5630-11ea-8363-0242ac110008 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9e264511-5630-11ea-8363-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:35:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-6mjjl" for this suite.
Feb 23 11:36:14.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:36:14.450: INFO: namespace: e2e-tests-sched-pred-6mjjl, resource: bindings, ignored listing per whitelist
Feb 23 11:36:14.555: INFO: namespace e2e-tests-sched-pred-6mjjl deletion completed in 24.243681709s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:45.039 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
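Note on the entry above: a minimal sketch of the label-then-nodeSelector flow the SchedulerPredicates test walks through (label a node, relaunch the pod with a matching nodeSelector, remove the label afterwards), using the Python kubernetes client. The label key, pause image, and namespace are illustrative assumptions; only the node name is taken from this run.

# Sketch only: nodeSelector scheduling against a freshly applied node label.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

node_name = "hunter-server-hu5at5svl7ps"            # any schedulable node works
label = {"example.com/e2e-demo": "42"}              # illustrative label key/value

# apply the label to the node (strategic-merge patch)
core.patch_node(node_name, {"metadata": {"labels": label}})

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "with-labels"},
    "spec": {
        "nodeSelector": label,                      # pod may only land on matching nodes
        "containers": [{"name": "pause", "image": "k8s.gcr.io/pause:3.1"}],
    },
}
core.create_namespaced_pod(namespace="default", body=pod)

# cleanup mirrors the test: once the pod is observed Running, drop the label again
core.patch_node(node_name, {"metadata": {"labels": {"example.com/e2e-demo": None}}})
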
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:36:14.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:36:14.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zh6gx" for this suite.
Feb 23 11:36:38.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:36:39.100: INFO: namespace: e2e-tests-pods-zh6gx, resource: bindings, ignored listing per whitelist
Feb 23 11:36:39.157: INFO: namespace e2e-tests-pods-zh6gx deletion completed in 24.327563614s

• [SLOW TEST:24.601 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
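Note on the entry above: the "Pods Set QOS Class" test reads status.qosClass back from a submitted pod. A minimal sketch of how that field gets the value "Guaranteed" (requests equal to limits on every container), using the Python kubernetes client; names and resource values are illustrative assumptions.

# Sketch only: requests == limits on all containers => QOS class "Guaranteed".
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "qos-demo"},
    "spec": {"containers": [{
        "name": "app",
        "image": "nginx",
        "resources": {
            "requests": {"cpu": "100m", "memory": "100Mi"},
            "limits":   {"cpu": "100m", "memory": "100Mi"},  # equal => Guaranteed
        },
    }]},
}
core.create_namespaced_pod(namespace="default", body=pod)
# once the status is populated this prints "Guaranteed"
print(core.read_namespaced_pod("qos-demo", "default").status.qos_class)
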
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:36:39.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 23 11:36:39.624: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 23 11:36:44.641: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:36:45.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-g62z4" for this suite.
Feb 23 11:36:52.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:36:54.517: INFO: namespace: e2e-tests-replication-controller-g62z4, resource: bindings, ignored listing per whitelist
Feb 23 11:36:54.611: INFO: namespace e2e-tests-replication-controller-g62z4 deletion completed in 8.911174155s

• [SLOW TEST:15.454 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
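Note on the entry above: a minimal sketch of the release mechanism being tested. When a pod's labels stop matching a ReplicationController's selector, the RC orphans it and creates a replacement. Written with the Python kubernetes client; names and the nginx image are illustrative assumptions.

# Sketch only: create an RC, then relabel one of its pods so it no longer matches.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

rc = {
    "apiVersion": "v1", "kind": "ReplicationController",
    "metadata": {"name": "pod-release-demo"},
    "spec": {
        "replicas": 1,
        "selector": {"name": "pod-release-demo"},
        "template": {
            "metadata": {"labels": {"name": "pod-release-demo"}},
            "spec": {"containers": [{"name": "app", "image": "nginx"}]},
        },
    },
}
core.create_namespaced_replication_controller(namespace="default", body=rc)

# once the pod exists, change its label; the RC releases it and spins up a replacement
pod = core.list_namespaced_pod("default", label_selector="name=pod-release-demo").items[0]
core.patch_namespaced_pod(pod.metadata.name, "default",
                          {"metadata": {"labels": {"name": "released"}}})
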
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:36:54.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-xvsdt
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-xvsdt
STEP: Deleting pre-stop pod
Feb 23 11:37:20.332: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:37:20.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-xvsdt" for this suite.
Feb 23 11:38:00.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:38:00.627: INFO: namespace: e2e-tests-prestop-xvsdt, resource: bindings, ignored listing per whitelist
Feb 23 11:38:00.708: INFO: namespace e2e-tests-prestop-xvsdt deletion completed in 40.314577774s

• [SLOW TEST:66.097 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
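Note on the entry above: the PreStop test runs a server pod and checks that the tester pod's preStop hook fired before deletion (the "prestop": 1 counter in the JSON above). A minimal sketch of the hook itself, using the Python kubernetes client; the echo-to-file handler is a stand-in for the test's HTTP notification, and all names are illustrative assumptions.

# Sketch only: a preStop lifecycle hook that runs before the container is stopped.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "prestop-demo"},
    "spec": {
        "terminationGracePeriodSeconds": 30,   # the hook must finish within this window
        "containers": [{
            "name": "app",
            "image": "nginx",
            "lifecycle": {"preStop": {"exec": {
                # the e2e test notifies a server pod here; a local marker file is enough to illustrate
                "command": ["/bin/sh", "-c", "echo prestop > /tmp/prestop"],
            }}},
        }],
    },
}
core.create_namespaced_pod(namespace="default", body=pod)
# deleting the pod triggers the preStop hook before SIGTERM reaches the container
core.delete_namespaced_pod("prestop-demo", "default")
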
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:38:00.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-m2s64/configmap-test-f21d8a87-5630-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 11:38:00.903: INFO: Waiting up to 5m0s for pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-m2s64" to be "success or failure"
Feb 23 11:38:01.015: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 112.20724ms
Feb 23 11:38:03.257: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353518659s
Feb 23 11:38:05.318: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414828508s
Feb 23 11:38:07.699: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795231741s
Feb 23 11:38:09.720: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816574318s
Feb 23 11:38:11.731: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.828169186s
STEP: Saw pod success
Feb 23 11:38:11.731: INFO: Pod "pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:38:11.735: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008 container env-test: 
STEP: delete the pod
Feb 23 11:38:13.751: INFO: Waiting for pod pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008 to disappear
Feb 23 11:38:14.499: INFO: Pod pod-configmaps-f21fa5df-5630-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:38:14.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m2s64" for this suite.
Feb 23 11:38:20.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:38:20.709: INFO: namespace: e2e-tests-configmap-m2s64, resource: bindings, ignored listing per whitelist
Feb 23 11:38:20.781: INFO: namespace e2e-tests-configmap-m2s64 deletion completed in 6.265985899s

• [SLOW TEST:20.073 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
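Note on the entry above: a minimal sketch of consuming a ConfigMap key as an environment variable, which is what the env-test container's output is checked against. Python kubernetes client; names, the busybox image, and the namespace are illustrative assumptions.

# Sketch only: ConfigMap key exposed via env.valueFrom.configMapKeyRef.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"

core.create_namespaced_config_map(ns, {
    "apiVersion": "v1", "kind": "ConfigMap",
    "metadata": {"name": "configmap-env-demo"},
    "data": {"data-1": "value-1"},
})

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-configmap-env-demo"},
    "spec": {"restartPolicy": "Never", "containers": [{
        "name": "env-test", "image": "busybox", "command": ["env"],
        "env": [{
            "name": "CONFIG_DATA_1",
            "valueFrom": {"configMapKeyRef": {"name": "configmap-env-demo", "key": "data-1"}},
        }],
    }]},
}
core.create_namespaced_pod(ns, pod)
# once the pod has Succeeded, the env dump can be read much like the test does:
# core.read_namespaced_pod_log("pod-configmap-env-demo", ns)
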
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:38:20.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-fe15155f-5630-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 11:38:20.969: INFO: Waiting up to 5m0s for pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-4sm9n" to be "success or failure"
Feb 23 11:38:21.022: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 53.046454ms
Feb 23 11:38:23.033: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0634064s
Feb 23 11:38:25.123: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154357828s
Feb 23 11:38:27.519: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549762572s
Feb 23 11:38:29.627: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.657887244s
Feb 23 11:38:31.642: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672682768s
STEP: Saw pod success
Feb 23 11:38:31.642: INFO: Pod "pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:38:31.651: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 11:38:31.760: INFO: Waiting for pod pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008 to disappear
Feb 23 11:38:31.776: INFO: Pod pod-secrets-fe166c0d-5630-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:38:31.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4sm9n" for this suite.
Feb 23 11:38:37.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:38:38.065: INFO: namespace: e2e-tests-secrets-4sm9n, resource: bindings, ignored listing per whitelist
Feb 23 11:38:38.176: INFO: namespace e2e-tests-secrets-4sm9n deletion completed in 6.375544468s

• [SLOW TEST:17.394 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
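Note on the entry above: "with mappings" refers to the items list on a secret volume, which maps a key to a custom path (and optionally a mode). A minimal sketch with the Python kubernetes client; names, paths, and the busybox image are illustrative assumptions.

# Sketch only: Secret mounted as a volume with an explicit key-to-path mapping.
import base64
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"

core.create_namespaced_secret(ns, {
    "apiVersion": "v1", "kind": "Secret",
    "metadata": {"name": "secret-map-demo"},
    "data": {"data-1": base64.b64encode(b"value-1").decode()},
})

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-secret-map-demo"},
    "spec": {"restartPolicy": "Never",
        "volumes": [{"name": "secret-volume", "secret": {
            "secretName": "secret-map-demo",
            "items": [{"key": "data-1", "path": "new-path-data-1", "mode": 0o400}],
        }}],
        "containers": [{
            "name": "secret-volume-test", "image": "busybox",
            "command": ["cat", "/etc/secret-volume/new-path-data-1"],
            "volumeMounts": [{"name": "secret-volume", "mountPath": "/etc/secret-volume"}],
        }]},
}
core.create_namespaced_pod(ns, pod)
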
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:38:38.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 11:38:38.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-62jb5" to be "success or failure"
Feb 23 11:38:38.326: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.13253ms
Feb 23 11:38:40.488: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170755547s
Feb 23 11:38:42.511: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194003741s
Feb 23 11:38:44.536: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218892332s
Feb 23 11:38:46.605: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28830008s
Feb 23 11:38:49.473: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.155382239s
STEP: Saw pod success
Feb 23 11:38:49.473: INFO: Pod "downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:38:49.494: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 11:38:49.882: INFO: Waiting for pod downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008 to disappear
Feb 23 11:38:49.954: INFO: Pod downwardapi-volume-086e6a51-5631-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:38:49.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-62jb5" for this suite.
Feb 23 11:38:55.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:38:56.149: INFO: namespace: e2e-tests-downward-api-62jb5, resource: bindings, ignored listing per whitelist
Feb 23 11:38:56.198: INFO: namespace e2e-tests-downward-api-62jb5 deletion completed in 6.234677803s

• [SLOW TEST:18.022 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
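Note on the entry above: a minimal sketch of a downward API volume with defaultMode set, the knob this test verifies by reading the projected file's permission bits. Python kubernetes client; names, mode value, and the busybox image are illustrative assumptions.

# Sketch only: downwardAPI volume with defaultMode applied to its projected files.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "downwardapi-mode-demo"},
    "spec": {"restartPolicy": "Never",
        "volumes": [{"name": "podinfo", "downwardAPI": {
            "defaultMode": 0o400,   # applied to every projected file unless overridden per item
            "items": [{"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}],
        }}],
        "containers": [{
            "name": "client-container", "image": "busybox",
            "command": ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }]},
}
core.create_namespaced_pod("default", pod)
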
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:38:56.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 23 11:39:09.184: INFO: Successfully updated pod "labelsupdate1347a63e-5631-11ea-8363-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:39:11.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bqwfz" for this suite.
Feb 23 11:39:35.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:39:35.679: INFO: namespace: e2e-tests-downward-api-bqwfz, resource: bindings, ignored listing per whitelist
Feb 23 11:39:35.682: INFO: namespace e2e-tests-downward-api-bqwfz deletion completed in 24.280973195s

• [SLOW TEST:39.483 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
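Note on the entry above: the "update labels on modification" test patches the pod's labels and waits for the kubelet to rewrite the projected file. A minimal sketch of that round trip with the Python kubernetes client; names, label values, and the busybox image are illustrative assumptions.

# Sketch only: metadata.labels projected via a downwardAPI volume, then patched live.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "labelsupdate-demo", "labels": {"key": "value1"}},
    "spec": {
        "volumes": [{"name": "podinfo", "downwardAPI": {
            "items": [{"path": "labels", "fieldRef": {"fieldPath": "metadata.labels"}}],
        }}],
        "containers": [{
            "name": "client-container", "image": "busybox",
            "command": ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
    },
}
core.create_namespaced_pod(ns, pod)

# later: change a label; the kubelet refreshes the file under /etc/podinfo
core.patch_namespaced_pod("labelsupdate-demo", ns,
                          {"metadata": {"labels": {"key": "value2"}}})
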
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:39:35.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 23 11:42:37.227: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:37.271: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:39.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:39.288: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:41.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:41.295: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:43.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:43.281: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:45.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:45.287: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:47.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:47.306: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:49.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:49.325: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:51.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:51.310: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:53.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:53.278: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:55.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:55.291: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:57.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:57.289: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:42:59.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:42:59.289: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:01.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:01.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:03.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:03.295: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:05.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:05.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:07.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:07.284: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:09.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:09.289: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:11.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:11.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:13.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:13.281: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:15.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:15.288: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:17.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:17.299: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:19.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:19.308: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:21.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:21.283: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:23.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:23.286: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:25.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:25.301: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:27.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:27.291: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:29.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:29.293: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:31.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:31.285: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:33.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:33.306: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:35.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:35.301: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:37.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:37.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:39.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:39.296: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:41.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:41.285: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:43.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:43.288: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:45.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:45.283: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:47.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:47.288: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:49.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:49.287: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:51.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:51.288: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:53.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:53.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:55.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:55.292: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:57.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:57.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:43:59.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:43:59.291: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:01.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:01.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:03.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:03.295: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:05.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:05.285: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:07.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:07.289: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:09.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:09.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:11.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:11.312: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 23 11:44:13.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 23 11:44:13.314: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:44:13.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6s4qs" for this suite.
Feb 23 11:44:37.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:44:37.599: INFO: namespace: e2e-tests-container-lifecycle-hook-6s4qs, resource: bindings, ignored listing per whitelist
Feb 23 11:44:37.650: INFO: namespace e2e-tests-container-lifecycle-hook-6s4qs deletion completed in 24.317467953s

• [SLOW TEST:301.968 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
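Note on the entry above: a minimal sketch of a postStart exec hook, the feature this lifecycle test exercises (the test also runs an HTTP handler pod to observe the hook; that part is omitted here). Python kubernetes client; names, the nginx image, and the hook command are illustrative assumptions.

# Sketch only: postStart exec hook; it runs right after the container starts, and the
# container is not considered started until the hook returns.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook-demo"},
    "spec": {"containers": [{
        "name": "app",
        "image": "nginx",
        "lifecycle": {"postStart": {"exec": {
            "command": ["/bin/sh", "-c", "echo poststart > /usr/share/message"],
        }}},
    }]},
}
core.create_namespaced_pod("default", pod)
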
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:44:37.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 23 11:44:37.866: INFO: Waiting up to 5m0s for pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-9jlzg" to be "success or failure"
Feb 23 11:44:37.943: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 76.214332ms
Feb 23 11:44:39.966: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099609979s
Feb 23 11:44:41.987: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12099836s
Feb 23 11:44:44.262: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395781401s
Feb 23 11:44:46.599: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.732382839s
Feb 23 11:44:48.620: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.753309388s
STEP: Saw pod success
Feb 23 11:44:48.620: INFO: Pod "downward-api-deb6b20b-5631-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:44:48.628: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-deb6b20b-5631-11ea-8363-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 23 11:44:48.772: INFO: Waiting for pod downward-api-deb6b20b-5631-11ea-8363-0242ac110008 to disappear
Feb 23 11:44:48.791: INFO: Pod downward-api-deb6b20b-5631-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:44:48.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9jlzg" for this suite.
Feb 23 11:44:54.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:44:54.993: INFO: namespace: e2e-tests-downward-api-9jlzg, resource: bindings, ignored listing per whitelist
Feb 23 11:44:55.031: INFO: namespace e2e-tests-downward-api-9jlzg deletion completed in 6.22941155s

• [SLOW TEST:17.381 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
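Note on the entry above: when a container declares no limits, resourceFieldRef env vars for limits.cpu and limits.memory default to the node's allocatable values, which is what the dapi-container's output is checked against. A minimal sketch with the Python kubernetes client; names and the busybox image are illustrative assumptions.

# Sketch only: limits.cpu / limits.memory via the downward API with no limits set,
# so the values fall back to node allocatable.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "downward-api-limits-demo"},
    "spec": {"restartPolicy": "Never", "containers": [{
        "name": "dapi-container", "image": "busybox",
        "command": ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"],
        # no resources.limits on purpose
        "env": [
            {"name": "CPU_LIMIT",
             "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
            {"name": "MEMORY_LIMIT",
             "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
        ],
    }]},
}
core.create_namespaced_pod("default", pod)
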
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:44:55.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:44:55.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-nc7tl" for this suite.
Feb 23 11:45:03.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:45:03.713: INFO: namespace: e2e-tests-kubelet-test-nc7tl, resource: bindings, ignored listing per whitelist
Feb 23 11:45:03.768: INFO: namespace e2e-tests-kubelet-test-nc7tl deletion completed in 8.26540156s

• [SLOW TEST:8.737 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:45:03.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ee553cd5-5631-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 11:45:04.064: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-zxm6j" to be "success or failure"
Feb 23 11:45:04.112: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 48.148012ms
Feb 23 11:45:06.123: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058695198s
Feb 23 11:45:08.141: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077243574s
Feb 23 11:45:10.214: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150068663s
Feb 23 11:45:13.585: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.521360516s
Feb 23 11:45:15.612: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.547757443s
STEP: Saw pod success
Feb 23 11:45:15.612: INFO: Pod "pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:45:15.638: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 11:45:16.420: INFO: Waiting for pod pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008 to disappear
Feb 23 11:45:16.439: INFO: Pod pod-projected-configmaps-ee56fd1b-5631-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:45:16.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zxm6j" for this suite.
Feb 23 11:45:22.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:45:22.771: INFO: namespace: e2e-tests-projected-zxm6j, resource: bindings, ignored listing per whitelist
Feb 23 11:45:22.873: INFO: namespace e2e-tests-projected-zxm6j deletion completed in 6.423220816s

• [SLOW TEST:19.104 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
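Note on the entry above: unlike the plain ConfigMap volume tests earlier in the log, this one uses the "projected" volume type, which combines multiple sources under one mount. A minimal sketch with a single configMap source, using the Python kubernetes client; names and the busybox image are illustrative assumptions.

# Sketch only: projected volume with a configMap source.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"

core.create_namespaced_config_map(ns, {
    "apiVersion": "v1", "kind": "ConfigMap",
    "metadata": {"name": "projected-configmap-demo"},
    "data": {"data-1": "value-1"},
})

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-projected-configmap-demo"},
    "spec": {"restartPolicy": "Never",
        "volumes": [{"name": "projected-volume", "projected": {"sources": [
            {"configMap": {"name": "projected-configmap-demo"}},
            # further sources (secret, downwardAPI, serviceAccountToken) could be listed here
        ]}}],
        "containers": [{
            "name": "projected-configmap-volume-test", "image": "busybox",
            "command": ["cat", "/etc/projected/data-1"],
            "volumeMounts": [{"name": "projected-volume", "mountPath": "/etc/projected"}],
        }]},
}
core.create_namespaced_pod(ns, pod)
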
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:45:22.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f9ac1842-5631-11ea-8363-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-f9ac1889-5631-11ea-8363-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f9ac1842-5631-11ea-8363-0242ac110008
STEP: Updating configmap cm-test-opt-upd-f9ac1889-5631-11ea-8363-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-f9ac18a1-5631-11ea-8363-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:45:41.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pzvzl" for this suite.
Feb 23 11:46:05.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:46:05.514: INFO: namespace: e2e-tests-configmap-pzvzl, resource: bindings, ignored listing per whitelist
Feb 23 11:46:05.618: INFO: namespace e2e-tests-configmap-pzvzl deletion completed in 24.207653244s

• [SLOW TEST:42.745 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
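Note on the entry above: "optional" on a configMap volume source lets the pod start even when the ConfigMap does not exist yet; the test then deletes, updates, and creates ConfigMaps and waits for the mounted files to follow. A minimal sketch of the optional/create-later part with the Python kubernetes client; names and the busybox image are illustrative assumptions.

# Sketch only: optional configMap volume; the ConfigMap is created after the pod.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-configmaps-optional-demo"},
    "spec": {
        "volumes": [{"name": "cm-create", "configMap": {
            "name": "cm-test-opt-create-demo",   # does not need to exist yet
            "optional": True,
        }}],
        "containers": [{
            "name": "watcher", "image": "busybox",
            "command": ["sh", "-c", "while true; do ls /etc/cm-create; sleep 5; done"],
            "volumeMounts": [{"name": "cm-create", "mountPath": "/etc/cm-create"}],
        }],
    },
}
core.create_namespaced_pod(ns, pod)

# creating the ConfigMap afterwards makes its keys appear under /etc/cm-create
core.create_namespaced_config_map(ns, {
    "apiVersion": "v1", "kind": "ConfigMap",
    "metadata": {"name": "cm-test-opt-create-demo"},
    "data": {"data-1": "value-1"},
})
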
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:46:05.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 23 11:46:05.881: INFO: Waiting up to 5m0s for pod "client-containers-13300efa-5632-11ea-8363-0242ac110008" in namespace "e2e-tests-containers-wknft" to be "success or failure"
Feb 23 11:46:05.946: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 65.424087ms
Feb 23 11:46:08.044: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163244736s
Feb 23 11:46:10.056: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175186005s
Feb 23 11:46:12.159: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277497982s
Feb 23 11:46:14.409: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528295624s
Feb 23 11:46:16.635: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.75446152s
STEP: Saw pod success
Feb 23 11:46:16.636: INFO: Pod "client-containers-13300efa-5632-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:46:16.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-13300efa-5632-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 11:46:16.834: INFO: Waiting for pod client-containers-13300efa-5632-11ea-8363-0242ac110008 to disappear
Feb 23 11:46:16.847: INFO: Pod client-containers-13300efa-5632-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:46:16.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wknft" for this suite.
Feb 23 11:46:23.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:46:23.160: INFO: namespace: e2e-tests-containers-wknft, resource: bindings, ignored listing per whitelist
Feb 23 11:46:23.231: INFO: namespace e2e-tests-containers-wknft deletion completed in 6.349380266s

• [SLOW TEST:17.613 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
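Note on the entry above: the "override all" test sets both command and args on the container, replacing the image's ENTRYPOINT and CMD. A minimal sketch with the Python kubernetes client; names, the busybox image, and the echoed arguments are illustrative assumptions.

# Sketch only: container command overrides ENTRYPOINT, args overrides CMD.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "client-containers-demo"},
    "spec": {"restartPolicy": "Never", "containers": [{
        "name": "test-container",
        "image": "busybox",
        "command": ["/bin/echo"],             # replaces the image ENTRYPOINT
        "args": ["override", "arguments"],    # replaces the image CMD
    }]},
}
core.create_namespaced_pod("default", pod)
# once the pod Succeeds, its log should read "override arguments"
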
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:46:23.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 11:46:23.414: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 23 11:46:28.451: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 23 11:46:34.479: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 23 11:46:34.573: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-ldhb7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ldhb7/deployments/test-cleanup-deployment,UID:2442a284-5632-11ea-a994-fa163e34d433,ResourceVersion:22641168,Generation:1,CreationTimestamp:2020-02-23 11:46:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 23 11:46:34.577: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:46:34.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ldhb7" for this suite.
Feb 23 11:46:42.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:46:42.897: INFO: namespace: e2e-tests-deployment-ldhb7, resource: bindings, ignored listing per whitelist
Feb 23 11:46:42.957: INFO: namespace e2e-tests-deployment-ldhb7 deletion completed in 8.299903658s

• [SLOW TEST:19.725 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
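Note on the entry above: the behaviour under test comes from RevisionHistoryLimit:*0 in the Deployment dump, which tells the controller to garbage collect superseded ReplicaSets instead of keeping them for rollback. A minimal sketch of such a Deployment with the Python kubernetes client; the namespace and the demo name are illustrative assumptions, while the label and redis image mirror the dump above.

# Sketch only: Deployment with revisionHistoryLimit 0, so old ReplicaSets are deleted.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
ns = "default"

deployment = {
    "apiVersion": "apps/v1", "kind": "Deployment",
    "metadata": {"name": "test-cleanup-demo", "labels": {"name": "cleanup-pod"}},
    "spec": {
        "replicas": 1,
        "revisionHistoryLimit": 0,     # drop old ReplicaSets as soon as they are replaced
        "selector": {"matchLabels": {"name": "cleanup-pod"}},
        "template": {
            "metadata": {"labels": {"name": "cleanup-pod"}},
            "spec": {"containers": [{"name": "redis",
                                     "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0"}]},
        },
    },
}
apps.create_namespaced_deployment(ns, deployment)
# after a rollout, only the newest ReplicaSet should remain:
# apps.list_namespaced_replica_set(ns, label_selector="name=cleanup-pod")
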
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:46:42.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-czdmx
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-czdmx
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-czdmx
Feb 23 11:46:44.420: INFO: Found 0 stateful pods, waiting for 1
Feb 23 11:46:54.436: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 23 11:47:04.441: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 23 11:47:04.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 11:47:05.304: INFO: stderr: "I0223 11:47:04.786822    1437 log.go:172] (0xc0001386e0) (0xc00071e640) Create stream\nI0223 11:47:04.787121    1437 log.go:172] (0xc0001386e0) (0xc00071e640) Stream added, broadcasting: 1\nI0223 11:47:04.796716    1437 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0223 11:47:04.796787    1437 log.go:172] (0xc0001386e0) (0xc0005e4dc0) Create stream\nI0223 11:47:04.796805    1437 log.go:172] (0xc0001386e0) (0xc0005e4dc0) Stream added, broadcasting: 3\nI0223 11:47:04.798394    1437 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0223 11:47:04.798424    1437 log.go:172] (0xc0001386e0) (0xc0005c0000) Create stream\nI0223 11:47:04.798433    1437 log.go:172] (0xc0001386e0) (0xc0005c0000) Stream added, broadcasting: 5\nI0223 11:47:04.799495    1437 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0223 11:47:05.159418    1437 log.go:172] (0xc0001386e0) Data frame received for 3\nI0223 11:47:05.159475    1437 log.go:172] (0xc0005e4dc0) (3) Data frame handling\nI0223 11:47:05.159498    1437 log.go:172] (0xc0005e4dc0) (3) Data frame sent\nI0223 11:47:05.293982    1437 log.go:172] (0xc0001386e0) Data frame received for 1\nI0223 11:47:05.294094    1437 log.go:172] (0xc0001386e0) (0xc0005c0000) Stream removed, broadcasting: 5\nI0223 11:47:05.294183    1437 log.go:172] (0xc00071e640) (1) Data frame handling\nI0223 11:47:05.294219    1437 log.go:172] (0xc00071e640) (1) Data frame sent\nI0223 11:47:05.294283    1437 log.go:172] (0xc0001386e0) (0xc0005e4dc0) Stream removed, broadcasting: 3\nI0223 11:47:05.294376    1437 log.go:172] (0xc0001386e0) (0xc00071e640) Stream removed, broadcasting: 1\nI0223 11:47:05.294421    1437 log.go:172] (0xc0001386e0) Go away received\nI0223 11:47:05.294960    1437 log.go:172] (0xc0001386e0) (0xc00071e640) Stream removed, broadcasting: 1\nI0223 11:47:05.294985    1437 log.go:172] (0xc0001386e0) (0xc0005e4dc0) Stream removed, broadcasting: 3\nI0223 11:47:05.295015    1437 log.go:172] (0xc0001386e0) (0xc0005c0000) Stream removed, broadcasting: 5\n"
Feb 23 11:47:05.304: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 11:47:05.304: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

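The exec above is how the test makes ss-0 unhealthy: moving index.html out of the web root makes the pod's readiness check fail (assuming, as sketched earlier, an HTTP probe against the index page), which is exactly what the next two Waiting-for-pod lines confirm. A rough manual equivalent:

  kubectl -n e2e-tests-statefulset-czdmx exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  kubectl -n e2e-tests-statefulset-czdmx get pod ss-0 -w   # watch READY drop from 1/1 to 0/1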
Feb 23 11:47:05.324: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 23 11:47:15.345: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 11:47:15.345: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 11:47:15.405: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:15.405: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:15.406: INFO: 
Feb 23 11:47:15.406: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 23 11:47:17.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982451907s
Feb 23 11:47:19.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962188797s
Feb 23 11:47:20.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.289468106s
Feb 23 11:47:21.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.259948852s
Feb 23 11:47:22.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.228622863s
Feb 23 11:47:23.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.211789136s
Feb 23 11:47:25.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 683.015592ms
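The verification loop above checks a single property: with ss-0 still unready, the controller keeps creating pods up to the requested 3 replicas and never overshoots that number. That is the burst-scaling behaviour under test, and it corresponds (again as an assumption about the manifest) to podManagementPolicy: Parallel, since with the default OrderedReady policy ss-1 would not be created until ss-0 became Ready. The same state can be inspected by hand with:

  kubectl -n e2e-tests-statefulset-czdmx get statefulset ss -o jsonpath='{.spec.replicas} {.status.replicas} {.status.readyReplicas}{"\n"}'
  kubectl -n e2e-tests-statefulset-czdmx get pods   # expect ss-0, ss-1, ss-2 even while ss-0 is unready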
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-czdmx
Feb 23 11:47:26.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:47:27.171: INFO: stderr: "I0223 11:47:26.776528    1459 log.go:172] (0xc00083e2c0) (0xc000724640) Create stream\nI0223 11:47:26.776928    1459 log.go:172] (0xc00083e2c0) (0xc000724640) Stream added, broadcasting: 1\nI0223 11:47:26.845138    1459 log.go:172] (0xc00083e2c0) Reply frame received for 1\nI0223 11:47:26.845314    1459 log.go:172] (0xc00083e2c0) (0xc0002aadc0) Create stream\nI0223 11:47:26.845331    1459 log.go:172] (0xc00083e2c0) (0xc0002aadc0) Stream added, broadcasting: 3\nI0223 11:47:26.854495    1459 log.go:172] (0xc00083e2c0) Reply frame received for 3\nI0223 11:47:26.854829    1459 log.go:172] (0xc00083e2c0) (0xc0007b6000) Create stream\nI0223 11:47:26.854910    1459 log.go:172] (0xc00083e2c0) (0xc0007b6000) Stream added, broadcasting: 5\nI0223 11:47:26.857479    1459 log.go:172] (0xc00083e2c0) Reply frame received for 5\nI0223 11:47:27.042184    1459 log.go:172] (0xc00083e2c0) Data frame received for 3\nI0223 11:47:27.042348    1459 log.go:172] (0xc0002aadc0) (3) Data frame handling\nI0223 11:47:27.042378    1459 log.go:172] (0xc0002aadc0) (3) Data frame sent\nI0223 11:47:27.161435    1459 log.go:172] (0xc00083e2c0) Data frame received for 1\nI0223 11:47:27.161556    1459 log.go:172] (0xc000724640) (1) Data frame handling\nI0223 11:47:27.161619    1459 log.go:172] (0xc000724640) (1) Data frame sent\nI0223 11:47:27.162874    1459 log.go:172] (0xc00083e2c0) (0xc0007b6000) Stream removed, broadcasting: 5\nI0223 11:47:27.162971    1459 log.go:172] (0xc00083e2c0) (0xc0002aadc0) Stream removed, broadcasting: 3\nI0223 11:47:27.163012    1459 log.go:172] (0xc00083e2c0) (0xc000724640) Stream removed, broadcasting: 1\nI0223 11:47:27.163035    1459 log.go:172] (0xc00083e2c0) Go away received\nI0223 11:47:27.163846    1459 log.go:172] (0xc00083e2c0) (0xc000724640) Stream removed, broadcasting: 1\nI0223 11:47:27.163869    1459 log.go:172] (0xc00083e2c0) (0xc0002aadc0) Stream removed, broadcasting: 3\nI0223 11:47:27.163883    1459 log.go:172] (0xc00083e2c0) (0xc0007b6000) Stream removed, broadcasting: 5\n"
Feb 23 11:47:27.172: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 11:47:27.172: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 11:47:27.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:47:27.821: INFO: stderr: "I0223 11:47:27.560406    1481 log.go:172] (0xc00070c370) (0xc00074c640) Create stream\nI0223 11:47:27.560692    1481 log.go:172] (0xc00070c370) (0xc00074c640) Stream added, broadcasting: 1\nI0223 11:47:27.569495    1481 log.go:172] (0xc00070c370) Reply frame received for 1\nI0223 11:47:27.569556    1481 log.go:172] (0xc00070c370) (0xc00074c6e0) Create stream\nI0223 11:47:27.569567    1481 log.go:172] (0xc00070c370) (0xc00074c6e0) Stream added, broadcasting: 3\nI0223 11:47:27.572640    1481 log.go:172] (0xc00070c370) Reply frame received for 3\nI0223 11:47:27.572683    1481 log.go:172] (0xc00070c370) (0xc0005e6d20) Create stream\nI0223 11:47:27.572718    1481 log.go:172] (0xc00070c370) (0xc0005e6d20) Stream added, broadcasting: 5\nI0223 11:47:27.577764    1481 log.go:172] (0xc00070c370) Reply frame received for 5\nI0223 11:47:27.707645    1481 log.go:172] (0xc00070c370) Data frame received for 3\nI0223 11:47:27.707737    1481 log.go:172] (0xc00074c6e0) (3) Data frame handling\nI0223 11:47:27.707765    1481 log.go:172] (0xc00074c6e0) (3) Data frame sent\nI0223 11:47:27.707815    1481 log.go:172] (0xc00070c370) Data frame received for 5\nI0223 11:47:27.707836    1481 log.go:172] (0xc0005e6d20) (5) Data frame handling\nI0223 11:47:27.707862    1481 log.go:172] (0xc0005e6d20) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0223 11:47:27.814338    1481 log.go:172] (0xc00070c370) (0xc0005e6d20) Stream removed, broadcasting: 5\nI0223 11:47:27.814449    1481 log.go:172] (0xc00070c370) Data frame received for 1\nI0223 11:47:27.814467    1481 log.go:172] (0xc00074c640) (1) Data frame handling\nI0223 11:47:27.814484    1481 log.go:172] (0xc00074c640) (1) Data frame sent\nI0223 11:47:27.814522    1481 log.go:172] (0xc00070c370) (0xc00074c6e0) Stream removed, broadcasting: 3\nI0223 11:47:27.814542    1481 log.go:172] (0xc00070c370) (0xc00074c640) Stream removed, broadcasting: 1\nI0223 11:47:27.814576    1481 log.go:172] (0xc00070c370) Go away received\nI0223 11:47:27.815329    1481 log.go:172] (0xc00070c370) (0xc00074c640) Stream removed, broadcasting: 1\nI0223 11:47:27.815342    1481 log.go:172] (0xc00070c370) (0xc00074c6e0) Stream removed, broadcasting: 3\nI0223 11:47:27.815346    1481 log.go:172] (0xc00070c370) (0xc0005e6d20) Stream removed, broadcasting: 5\n"
Feb 23 11:47:27.821: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 11:47:27.821: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 11:47:27.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:47:28.418: INFO: stderr: "I0223 11:47:28.144491    1504 log.go:172] (0xc00075a370) (0xc00064d220) Create stream\nI0223 11:47:28.144699    1504 log.go:172] (0xc00075a370) (0xc00064d220) Stream added, broadcasting: 1\nI0223 11:47:28.150359    1504 log.go:172] (0xc00075a370) Reply frame received for 1\nI0223 11:47:28.150403    1504 log.go:172] (0xc00075a370) (0xc00002a000) Create stream\nI0223 11:47:28.150415    1504 log.go:172] (0xc00075a370) (0xc00002a000) Stream added, broadcasting: 3\nI0223 11:47:28.151513    1504 log.go:172] (0xc00075a370) Reply frame received for 3\nI0223 11:47:28.151545    1504 log.go:172] (0xc00075a370) (0xc0006fa000) Create stream\nI0223 11:47:28.151559    1504 log.go:172] (0xc00075a370) (0xc0006fa000) Stream added, broadcasting: 5\nI0223 11:47:28.152657    1504 log.go:172] (0xc00075a370) Reply frame received for 5\nI0223 11:47:28.253871    1504 log.go:172] (0xc00075a370) Data frame received for 5\nI0223 11:47:28.253961    1504 log.go:172] (0xc0006fa000) (5) Data frame handling\nI0223 11:47:28.253975    1504 log.go:172] (0xc0006fa000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0223 11:47:28.254018    1504 log.go:172] (0xc00075a370) Data frame received for 3\nI0223 11:47:28.254040    1504 log.go:172] (0xc00002a000) (3) Data frame handling\nI0223 11:47:28.254056    1504 log.go:172] (0xc00002a000) (3) Data frame sent\nI0223 11:47:28.410059    1504 log.go:172] (0xc00075a370) Data frame received for 1\nI0223 11:47:28.410199    1504 log.go:172] (0xc00064d220) (1) Data frame handling\nI0223 11:47:28.410242    1504 log.go:172] (0xc00064d220) (1) Data frame sent\nI0223 11:47:28.410670    1504 log.go:172] (0xc00075a370) (0xc00002a000) Stream removed, broadcasting: 3\nI0223 11:47:28.410722    1504 log.go:172] (0xc00075a370) (0xc0006fa000) Stream removed, broadcasting: 5\nI0223 11:47:28.410803    1504 log.go:172] (0xc00075a370) (0xc00064d220) Stream removed, broadcasting: 1\nI0223 11:47:28.410962    1504 log.go:172] (0xc00075a370) Go away received\nI0223 11:47:28.411215    1504 log.go:172] (0xc00075a370) (0xc00064d220) Stream removed, broadcasting: 1\nI0223 11:47:28.411234    1504 log.go:172] (0xc00075a370) (0xc00002a000) Stream removed, broadcasting: 3\nI0223 11:47:28.411243    1504 log.go:172] (0xc00075a370) (0xc0006fa000) Stream removed, broadcasting: 5\n"
Feb 23 11:47:28.419: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 11:47:28.419: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

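The three exec calls above restore readiness by moving index.html back into each web root so the scale-up can finish. On ss-1 and ss-2 the embedded stderr contains mv: can't rename '/tmp/index.html': No such file or directory, because only ss-0 ever had its index moved; the trailing || true is what keeps the overall command's exit status at zero in that case:

  kubectl -n e2e-tests-statefulset-czdmx exec ss-1 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
  echo $?   # 0 even when mv itself failed, thanks to the || true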
Feb 23 11:47:28.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 11:47:28.430: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 11:47:28.430: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Feb 23 11:47:38.511: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 11:47:38.511: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 11:47:38.511: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Feb 23 11:47:38.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 11:47:39.105: INFO: stderr: "I0223 11:47:38.757848    1526 log.go:172] (0xc000758370) (0xc000778640) Create stream\nI0223 11:47:38.758105    1526 log.go:172] (0xc000758370) (0xc000778640) Stream added, broadcasting: 1\nI0223 11:47:38.763307    1526 log.go:172] (0xc000758370) Reply frame received for 1\nI0223 11:47:38.763352    1526 log.go:172] (0xc000758370) (0xc0006a4dc0) Create stream\nI0223 11:47:38.763360    1526 log.go:172] (0xc000758370) (0xc0006a4dc0) Stream added, broadcasting: 3\nI0223 11:47:38.764436    1526 log.go:172] (0xc000758370) Reply frame received for 3\nI0223 11:47:38.764562    1526 log.go:172] (0xc000758370) (0xc0006a4f00) Create stream\nI0223 11:47:38.764589    1526 log.go:172] (0xc000758370) (0xc0006a4f00) Stream added, broadcasting: 5\nI0223 11:47:38.768339    1526 log.go:172] (0xc000758370) Reply frame received for 5\nI0223 11:47:38.936905    1526 log.go:172] (0xc000758370) Data frame received for 3\nI0223 11:47:38.936956    1526 log.go:172] (0xc0006a4dc0) (3) Data frame handling\nI0223 11:47:38.936972    1526 log.go:172] (0xc0006a4dc0) (3) Data frame sent\nI0223 11:47:39.090391    1526 log.go:172] (0xc000758370) Data frame received for 1\nI0223 11:47:39.090520    1526 log.go:172] (0xc000778640) (1) Data frame handling\nI0223 11:47:39.090614    1526 log.go:172] (0xc000778640) (1) Data frame sent\nI0223 11:47:39.090666    1526 log.go:172] (0xc000758370) (0xc000778640) Stream removed, broadcasting: 1\nI0223 11:47:39.091115    1526 log.go:172] (0xc000758370) (0xc0006a4dc0) Stream removed, broadcasting: 3\nI0223 11:47:39.091825    1526 log.go:172] (0xc000758370) (0xc0006a4f00) Stream removed, broadcasting: 5\nI0223 11:47:39.092142    1526 log.go:172] (0xc000758370) Go away received\nI0223 11:47:39.092570    1526 log.go:172] (0xc000758370) (0xc000778640) Stream removed, broadcasting: 1\nI0223 11:47:39.092640    1526 log.go:172] (0xc000758370) (0xc0006a4dc0) Stream removed, broadcasting: 3\nI0223 11:47:39.092659    1526 log.go:172] (0xc000758370) (0xc0006a4f00) Stream removed, broadcasting: 5\n"
Feb 23 11:47:39.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 11:47:39.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 11:47:39.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 11:47:39.790: INFO: stderr: "I0223 11:47:39.273120    1548 log.go:172] (0xc0004a64d0) (0xc000728640) Create stream\nI0223 11:47:39.273342    1548 log.go:172] (0xc0004a64d0) (0xc000728640) Stream added, broadcasting: 1\nI0223 11:47:39.285706    1548 log.go:172] (0xc0004a64d0) Reply frame received for 1\nI0223 11:47:39.285845    1548 log.go:172] (0xc0004a64d0) (0xc0007debe0) Create stream\nI0223 11:47:39.285865    1548 log.go:172] (0xc0004a64d0) (0xc0007debe0) Stream added, broadcasting: 3\nI0223 11:47:39.289238    1548 log.go:172] (0xc0004a64d0) Reply frame received for 3\nI0223 11:47:39.289282    1548 log.go:172] (0xc0004a64d0) (0xc00067a000) Create stream\nI0223 11:47:39.289303    1548 log.go:172] (0xc0004a64d0) (0xc00067a000) Stream added, broadcasting: 5\nI0223 11:47:39.292115    1548 log.go:172] (0xc0004a64d0) Reply frame received for 5\nI0223 11:47:39.564689    1548 log.go:172] (0xc0004a64d0) Data frame received for 3\nI0223 11:47:39.564769    1548 log.go:172] (0xc0007debe0) (3) Data frame handling\nI0223 11:47:39.564818    1548 log.go:172] (0xc0007debe0) (3) Data frame sent\nI0223 11:47:39.771030    1548 log.go:172] (0xc0004a64d0) Data frame received for 1\nI0223 11:47:39.771304    1548 log.go:172] (0xc0004a64d0) (0xc0007debe0) Stream removed, broadcasting: 3\nI0223 11:47:39.771422    1548 log.go:172] (0xc000728640) (1) Data frame handling\nI0223 11:47:39.771491    1548 log.go:172] (0xc000728640) (1) Data frame sent\nI0223 11:47:39.771595    1548 log.go:172] (0xc0004a64d0) (0xc00067a000) Stream removed, broadcasting: 5\nI0223 11:47:39.771641    1548 log.go:172] (0xc0004a64d0) (0xc000728640) Stream removed, broadcasting: 1\nI0223 11:47:39.771669    1548 log.go:172] (0xc0004a64d0) Go away received\nI0223 11:47:39.772658    1548 log.go:172] (0xc0004a64d0) (0xc000728640) Stream removed, broadcasting: 1\nI0223 11:47:39.772736    1548 log.go:172] (0xc0004a64d0) (0xc0007debe0) Stream removed, broadcasting: 3\nI0223 11:47:39.772769    1548 log.go:172] (0xc0004a64d0) (0xc00067a000) Stream removed, broadcasting: 5\n"
Feb 23 11:47:39.790: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 11:47:39.790: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 11:47:39.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 11:47:40.361: INFO: stderr: "I0223 11:47:40.039607    1571 log.go:172] (0xc0006ea370) (0xc000708640) Create stream\nI0223 11:47:40.039827    1571 log.go:172] (0xc0006ea370) (0xc000708640) Stream added, broadcasting: 1\nI0223 11:47:40.045805    1571 log.go:172] (0xc0006ea370) Reply frame received for 1\nI0223 11:47:40.045844    1571 log.go:172] (0xc0006ea370) (0xc00058cdc0) Create stream\nI0223 11:47:40.045872    1571 log.go:172] (0xc0006ea370) (0xc00058cdc0) Stream added, broadcasting: 3\nI0223 11:47:40.046820    1571 log.go:172] (0xc0006ea370) Reply frame received for 3\nI0223 11:47:40.046863    1571 log.go:172] (0xc0006ea370) (0xc000676000) Create stream\nI0223 11:47:40.046888    1571 log.go:172] (0xc0006ea370) (0xc000676000) Stream added, broadcasting: 5\nI0223 11:47:40.047577    1571 log.go:172] (0xc0006ea370) Reply frame received for 5\nI0223 11:47:40.183308    1571 log.go:172] (0xc0006ea370) Data frame received for 3\nI0223 11:47:40.183411    1571 log.go:172] (0xc00058cdc0) (3) Data frame handling\nI0223 11:47:40.183432    1571 log.go:172] (0xc00058cdc0) (3) Data frame sent\nI0223 11:47:40.345653    1571 log.go:172] (0xc0006ea370) Data frame received for 1\nI0223 11:47:40.345903    1571 log.go:172] (0xc0006ea370) (0xc000676000) Stream removed, broadcasting: 5\nI0223 11:47:40.346015    1571 log.go:172] (0xc000708640) (1) Data frame handling\nI0223 11:47:40.346046    1571 log.go:172] (0xc000708640) (1) Data frame sent\nI0223 11:47:40.346058    1571 log.go:172] (0xc0006ea370) (0xc000708640) Stream removed, broadcasting: 1\nI0223 11:47:40.346796    1571 log.go:172] (0xc0006ea370) (0xc00058cdc0) Stream removed, broadcasting: 3\nI0223 11:47:40.346847    1571 log.go:172] (0xc0006ea370) Go away received\nI0223 11:47:40.347275    1571 log.go:172] (0xc0006ea370) (0xc000708640) Stream removed, broadcasting: 1\nI0223 11:47:40.347420    1571 log.go:172] (0xc0006ea370) (0xc00058cdc0) Stream removed, broadcasting: 3\nI0223 11:47:40.347441    1571 log.go:172] (0xc0006ea370) (0xc000676000) Stream removed, broadcasting: 5\n"
Feb 23 11:47:40.361: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 11:47:40.361: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

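All three replicas have now been made unready again on purpose; the point of the next phase is that scaling down must not get stuck waiting for them to become Ready. The test drives the scale-down through the API, but the imperative CLI equivalent would simply be:

  kubectl -n e2e-tests-statefulset-czdmx scale statefulset ss --replicas=0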
Feb 23 11:47:40.361: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 11:47:40.382: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 23 11:47:50.403: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 11:47:50.403: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 11:47:50.403: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 11:47:50.432: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:50.432: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:50.432: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:50.433: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:50.433: INFO: 
Feb 23 11:47:50.433: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 23 11:47:52.615: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:52.615: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:52.616: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:52.616: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:52.616: INFO: 
Feb 23 11:47:52.616: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 23 11:47:53.674: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:53.674: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:53.674: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:53.675: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:53.675: INFO: 
Feb 23 11:47:53.675: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 23 11:47:54.689: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:54.689: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:54.689: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:54.689: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:54.689: INFO: 
Feb 23 11:47:54.689: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 23 11:47:56.725: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:56.725: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:56.726: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:56.726: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:56.726: INFO: 
Feb 23 11:47:56.726: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 23 11:47:58.506: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:58.506: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:58.507: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:58.507: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:58.507: INFO: 
Feb 23 11:47:58.507: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 23 11:47:59.532: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 23 11:47:59.532: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:46:44 +0000 UTC  }]
Feb 23 11:47:59.533: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:59.533: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-23 11:47:15 +0000 UTC  }]
Feb 23 11:47:59.533: INFO: 
Feb 23 11:47:59.533: INFO: StatefulSet ss has not reached scale 0, at 3
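From the 11:47:52 snapshot onward every pod shows a GRACE value of 30s: the pods carry deletion timestamps and are counting down their termination grace period (30s is the Kubernetes default terminationGracePeriodSeconds, so reading it as the default rather than an explicit setting is an assumption). The same fields can be read directly with:

  kubectl -n e2e-tests-statefulset-czdmx get pod ss-0 -o jsonpath='{.metadata.deletionTimestamp} {.metadata.deletionGracePeriodSeconds}{"\n"}'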
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace e2e-tests-statefulset-czdmx
Feb 23 11:48:00.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:48:00.890: INFO: rc: 1
Feb 23 11:48:00.890: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0026388d0 exit status 1   true [0xc001014690 0xc0010146a8 0xc0010146c0] [0xc001014690 0xc0010146a8 0xc0010146c0] [0xc0010146a0 0xc0010146b8] [0x935700 0x935700] 0xc0012aaea0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

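The restore command starts failing here because ss-0 is being torn down as part of the scale-down: first kubectl exec cannot attach to the terminating container ("container not found"), and once the pod object is gone the API answers with NotFound instead. The framework simply retries the same host command every 10 seconds; a hand-rolled equivalent of that retry (interval assumed from the "Waiting 10s to retry" lines) would be:

  until kubectl -n e2e-tests-statefulset-czdmx exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; do
    sleep 10   # matches the log's 10s retry cadence
  done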
Feb 23 11:48:10.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:48:11.116: INFO: rc: 1
Feb 23 11:48:11.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0026389f0 exit status 1   true [0xc0010146c8 0xc0010146e0 0xc0010146f8] [0xc0010146c8 0xc0010146e0 0xc0010146f8] [0xc0010146d8 0xc0010146f0] [0x935700 0x935700] 0xc0012ab3e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 23 11:48:21.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:48:21.288: INFO: rc: 1
Feb 23 11:48:21.289: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00229db90 exit status 1   true [0xc001d964d8 0xc001d964f0 0xc001d96508] [0xc001d964d8 0xc001d964f0 0xc001d96508] [0xc001d964e8 0xc001d96500] [0x935700 0x935700] 0xc001b045a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 23 11:48:31.289 - 11:52:56.114: INFO: The same RunHostCmd was retried 27 further times at roughly 10-second intervals ('/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'). Every attempt returned rc: 1 with empty stdout and the same stderr:
Error from server (NotFound): pods "ss-0" not found

Feb 23 11:53:06.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-czdmx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 11:53:06.259: INFO: rc: 1
Feb 23 11:53:06.260: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 23 11:53:06.260: INFO: Scaling statefulset ss to 0
Feb 23 11:53:06.342: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 23 11:53:06.348: INFO: Deleting all statefulset in ns e2e-tests-statefulset-czdmx
Feb 23 11:53:06.354: INFO: Scaling statefulset ss to 0
Feb 23 11:53:06.380: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 11:53:06.387: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:53:06.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-czdmx" for this suite.
Feb 23 11:53:14.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:53:14.717: INFO: namespace: e2e-tests-statefulset-czdmx, resource: bindings, ignored listing per whitelist
Feb 23 11:53:14.954: INFO: namespace e2e-tests-statefulset-czdmx deletion completed in 8.364743326s

• [SLOW TEST:391.996 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:53:14.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 23 11:53:15.215: INFO: Waiting up to 5m0s for pod "pod-131a03da-5633-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-pczcv" to be "success or failure"
Feb 23 11:53:15.237: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.988764ms
Feb 23 11:53:17.248: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032879783s
Feb 23 11:53:19.259: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04422873s
Feb 23 11:53:21.582: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366678284s
Feb 23 11:53:23.918: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.703329595s
Feb 23 11:53:25.992: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.777195105s
STEP: Saw pod success
Feb 23 11:53:25.992: INFO: Pod "pod-131a03da-5633-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:53:26.000: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-131a03da-5633-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 11:53:26.380: INFO: Waiting for pod pod-131a03da-5633-11ea-8363-0242ac110008 to disappear
Feb 23 11:53:26.477: INFO: Pod pod-131a03da-5633-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:53:26.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pczcv" for this suite.
Feb 23 11:53:32.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:53:32.704: INFO: namespace: e2e-tests-emptydir-pczcv, resource: bindings, ignored listing per whitelist
Feb 23 11:53:32.744: INFO: namespace e2e-tests-emptydir-pczcv deletion completed in 6.254244368s

• [SLOW TEST:17.791 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:53:32.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1db5e186-5633-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 11:53:33.037: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-7kmmc" to be "success or failure"
Feb 23 11:53:33.042: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.108321ms
Feb 23 11:53:35.403: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365693154s
Feb 23 11:53:37.420: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382891852s
Feb 23 11:53:39.435: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398258633s
Feb 23 11:53:41.450: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412948426s
Feb 23 11:53:43.464: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.426609731s
STEP: Saw pod success
Feb 23 11:53:43.464: INFO: Pod "pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:53:43.468: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 11:53:44.195: INFO: Waiting for pod pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008 to disappear
Feb 23 11:53:44.214: INFO: Pod pod-projected-configmaps-1db6e33a-5633-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:53:44.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7kmmc" for this suite.
Feb 23 11:53:50.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:53:50.768: INFO: namespace: e2e-tests-projected-7kmmc, resource: bindings, ignored listing per whitelist
Feb 23 11:53:50.816: INFO: namespace e2e-tests-projected-7kmmc deletion completed in 6.594878975s

• [SLOW TEST:18.071 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:53:50.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-28784f10-5633-11ea-8363-0242ac110008
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-28784f10-5633-11ea-8363-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:55:23.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-splkt" for this suite.
Feb 23 11:55:47.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:55:47.913: INFO: namespace: e2e-tests-configmap-splkt, resource: bindings, ignored listing per whitelist
Feb 23 11:55:48.073: INFO: namespace e2e-tests-configmap-splkt deletion completed in 24.297934558s

• [SLOW TEST:117.256 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:55:48.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-hjjgx
Feb 23 11:55:58.406: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-hjjgx
STEP: checking the pod's current state and verifying that restartCount is present
Feb 23 11:55:58.411: INFO: Initial restart count of pod liveness-exec is 0
Feb 23 11:56:49.316: INFO: Restart count of pod e2e-tests-container-probe-hjjgx/liveness-exec is now 1 (50.904743071s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:56:49.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hjjgx" for this suite.
Feb 23 11:56:55.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:56:55.538: INFO: namespace: e2e-tests-container-probe-hjjgx, resource: bindings, ignored listing per whitelist
Feb 23 11:56:55.577: INFO: namespace e2e-tests-container-probe-hjjgx deletion completed in 6.190932913s

• [SLOW TEST:67.504 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:56:55.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rwpcz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 23 11:56:55.791: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 23 11:57:32.136: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-rwpcz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 11:57:32.136: INFO: >>> kubeConfig: /root/.kube/config
I0223 11:57:32.240700       8 log.go:172] (0xc0000fd1e0) (0xc0011ea3c0) Create stream
I0223 11:57:32.240739       8 log.go:172] (0xc0000fd1e0) (0xc0011ea3c0) Stream added, broadcasting: 1
I0223 11:57:32.246276       8 log.go:172] (0xc0000fd1e0) Reply frame received for 1
I0223 11:57:32.246329       8 log.go:172] (0xc0000fd1e0) (0xc0011ea460) Create stream
I0223 11:57:32.246350       8 log.go:172] (0xc0000fd1e0) (0xc0011ea460) Stream added, broadcasting: 3
I0223 11:57:32.247785       8 log.go:172] (0xc0000fd1e0) Reply frame received for 3
I0223 11:57:32.247817       8 log.go:172] (0xc0000fd1e0) (0xc00256cc80) Create stream
I0223 11:57:32.247824       8 log.go:172] (0xc0000fd1e0) (0xc00256cc80) Stream added, broadcasting: 5
I0223 11:57:32.250116       8 log.go:172] (0xc0000fd1e0) Reply frame received for 5
I0223 11:57:33.422968       8 log.go:172] (0xc0000fd1e0) Data frame received for 3
I0223 11:57:33.423037       8 log.go:172] (0xc0011ea460) (3) Data frame handling
I0223 11:57:33.423062       8 log.go:172] (0xc0011ea460) (3) Data frame sent
I0223 11:57:33.653342       8 log.go:172] (0xc0000fd1e0) (0xc0011ea460) Stream removed, broadcasting: 3
I0223 11:57:33.653499       8 log.go:172] (0xc0000fd1e0) Data frame received for 1
I0223 11:57:33.653513       8 log.go:172] (0xc0011ea3c0) (1) Data frame handling
I0223 11:57:33.653534       8 log.go:172] (0xc0011ea3c0) (1) Data frame sent
I0223 11:57:33.653566       8 log.go:172] (0xc0000fd1e0) (0xc0011ea3c0) Stream removed, broadcasting: 1
I0223 11:57:33.661681       8 log.go:172] (0xc0000fd1e0) (0xc00256cc80) Stream removed, broadcasting: 5
I0223 11:57:33.661850       8 log.go:172] (0xc0000fd1e0) Go away received
I0223 11:57:33.661975       8 log.go:172] (0xc0000fd1e0) (0xc0011ea3c0) Stream removed, broadcasting: 1
I0223 11:57:33.662017       8 log.go:172] (0xc0000fd1e0) (0xc0011ea460) Stream removed, broadcasting: 3
I0223 11:57:33.662041       8 log.go:172] (0xc0000fd1e0) (0xc00256cc80) Stream removed, broadcasting: 5
Feb 23 11:57:33.662: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:57:33.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rwpcz" for this suite.
Feb 23 11:57:57.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:57:57.912: INFO: namespace: e2e-tests-pod-network-test-rwpcz, resource: bindings, ignored listing per whitelist
Feb 23 11:57:57.936: INFO: namespace e2e-tests-pod-network-test-rwpcz deletion completed in 24.257744894s

• [SLOW TEST:62.358 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:57:57.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 23 11:57:58.156: INFO: Waiting up to 5m0s for pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-k2gs4" to be "success or failure"
Feb 23 11:57:58.198: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 41.578456ms
Feb 23 11:58:00.405: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24906829s
Feb 23 11:58:02.422: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265736329s
Feb 23 11:58:05.433: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.276666064s
Feb 23 11:58:07.452: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.295559386s
Feb 23 11:58:09.468: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.31177501s
STEP: Saw pod success
Feb 23 11:58:09.468: INFO: Pod "pod-bbbfc9cc-5633-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 11:58:09.483: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bbbfc9cc-5633-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 11:58:09.581: INFO: Waiting for pod pod-bbbfc9cc-5633-11ea-8363-0242ac110008 to disappear
Feb 23 11:58:09.590: INFO: Pod pod-bbbfc9cc-5633-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:58:09.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k2gs4" for this suite.
Feb 23 11:58:17.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:58:17.752: INFO: namespace: e2e-tests-emptydir-k2gs4, resource: bindings, ignored listing per whitelist
Feb 23 11:58:17.881: INFO: namespace e2e-tests-emptydir-k2gs4 deletion completed in 8.286268823s

• [SLOW TEST:19.945 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:58:17.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 23 11:58:18.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-57gj5 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 23 11:58:32.259: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0223 11:58:30.790012    2217 log.go:172] (0xc000138790) (0xc0008846e0) Create stream\nI0223 11:58:30.790181    2217 log.go:172] (0xc000138790) (0xc0008846e0) Stream added, broadcasting: 1\nI0223 11:58:30.795643    2217 log.go:172] (0xc000138790) Reply frame received for 1\nI0223 11:58:30.795686    2217 log.go:172] (0xc000138790) (0xc00090a780) Create stream\nI0223 11:58:30.795695    2217 log.go:172] (0xc000138790) (0xc00090a780) Stream added, broadcasting: 3\nI0223 11:58:30.796574    2217 log.go:172] (0xc000138790) Reply frame received for 3\nI0223 11:58:30.796600    2217 log.go:172] (0xc000138790) (0xc000884780) Create stream\nI0223 11:58:30.796607    2217 log.go:172] (0xc000138790) (0xc000884780) Stream added, broadcasting: 5\nI0223 11:58:30.797426    2217 log.go:172] (0xc000138790) Reply frame received for 5\nI0223 11:58:30.797449    2217 log.go:172] (0xc000138790) (0xc000884820) Create stream\nI0223 11:58:30.797463    2217 log.go:172] (0xc000138790) (0xc000884820) Stream added, broadcasting: 7\nI0223 11:58:30.798598    2217 log.go:172] (0xc000138790) Reply frame received for 7\nI0223 11:58:30.798784    2217 log.go:172] (0xc00090a780) (3) Writing data frame\nI0223 11:58:30.798929    2217 log.go:172] (0xc00090a780) (3) Writing data frame\nI0223 11:58:30.803203    2217 log.go:172] (0xc000138790) Data frame received for 5\nI0223 11:58:30.803220    2217 log.go:172] (0xc000884780) (5) Data frame handling\nI0223 11:58:30.803232    2217 log.go:172] (0xc000884780) (5) Data frame sent\nI0223 11:58:30.809125    2217 log.go:172] (0xc000138790) Data frame received for 5\nI0223 11:58:30.809140    2217 log.go:172] (0xc000884780) (5) Data frame handling\nI0223 11:58:30.809151    2217 log.go:172] (0xc000884780) (5) Data frame sent\nI0223 11:58:32.172093    2217 log.go:172] (0xc000138790) Data frame received for 1\nI0223 11:58:32.172280    2217 log.go:172] (0xc000138790) (0xc00090a780) Stream removed, broadcasting: 3\nI0223 11:58:32.172465    2217 log.go:172] (0xc0008846e0) (1) Data frame handling\nI0223 11:58:32.172565    2217 log.go:172] (0xc0008846e0) (1) Data frame sent\nI0223 11:58:32.172597    2217 log.go:172] (0xc000138790) (0xc000884780) Stream removed, broadcasting: 5\nI0223 11:58:32.172656    2217 log.go:172] (0xc000138790) (0xc000884820) Stream removed, broadcasting: 7\nI0223 11:58:32.172719    2217 log.go:172] (0xc000138790) (0xc0008846e0) Stream removed, broadcasting: 1\nI0223 11:58:32.172797    2217 log.go:172] (0xc000138790) Go away received\nI0223 11:58:32.173472    2217 log.go:172] (0xc000138790) (0xc0008846e0) Stream removed, broadcasting: 1\nI0223 11:58:32.173498    2217 log.go:172] (0xc000138790) (0xc00090a780) Stream removed, broadcasting: 3\nI0223 11:58:32.173508    2217 log.go:172] (0xc000138790) (0xc000884780) Stream removed, broadcasting: 5\nI0223 11:58:32.173516    2217 log.go:172] (0xc000138790) (0xc000884820) Stream removed, broadcasting: 7\n"
Feb 23 11:58:32.260: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 11:58:34.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-57gj5" for this suite.
Feb 23 11:58:40.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 11:58:40.729: INFO: namespace: e2e-tests-kubectl-57gj5, resource: bindings, ignored listing per whitelist
Feb 23 11:58:40.760: INFO: namespace e2e-tests-kubectl-57gj5 deletion completed in 6.471945504s

• [SLOW TEST:22.879 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 11:58:40.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 23 11:58:42.100: INFO: Pod name wrapped-volume-race-d5ed6c1b-5633-11ea-8363-0242ac110008: Found 0 pods out of 5
Feb 23 11:58:47.120: INFO: Pod name wrapped-volume-race-d5ed6c1b-5633-11ea-8363-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d5ed6c1b-5633-11ea-8363-0242ac110008 in namespace e2e-tests-emptydir-wrapper-2snbn, will wait for the garbage collector to delete the pods
Feb 23 12:00:41.296: INFO: Deleting ReplicationController wrapped-volume-race-d5ed6c1b-5633-11ea-8363-0242ac110008 took: 23.948527ms
Feb 23 12:00:43.097: INFO: Terminating ReplicationController wrapped-volume-race-d5ed6c1b-5633-11ea-8363-0242ac110008 pods took: 1.801013263s
STEP: Creating RC which spawns configmap-volume pods
Feb 23 12:01:33.308: INFO: Pod name wrapped-volume-race-3bee3986-5634-11ea-8363-0242ac110008: Found 0 pods out of 5
Feb 23 12:01:38.339: INFO: Pod name wrapped-volume-race-3bee3986-5634-11ea-8363-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3bee3986-5634-11ea-8363-0242ac110008 in namespace e2e-tests-emptydir-wrapper-2snbn, will wait for the garbage collector to delete the pods
Feb 23 12:04:24.476: INFO: Deleting ReplicationController wrapped-volume-race-3bee3986-5634-11ea-8363-0242ac110008 took: 30.123508ms
Feb 23 12:04:24.877: INFO: Terminating ReplicationController wrapped-volume-race-3bee3986-5634-11ea-8363-0242ac110008 pods took: 400.891662ms
STEP: Creating RC which spawns configmap-volume pods
Feb 23 12:05:13.573: INFO: Pod name wrapped-volume-race-bf3a60ed-5634-11ea-8363-0242ac110008: Found 0 pods out of 5
Feb 23 12:05:18.626: INFO: Pod name wrapped-volume-race-bf3a60ed-5634-11ea-8363-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bf3a60ed-5634-11ea-8363-0242ac110008 in namespace e2e-tests-emptydir-wrapper-2snbn, will wait for the garbage collector to delete the pods
Feb 23 12:07:02.764: INFO: Deleting ReplicationController wrapped-volume-race-bf3a60ed-5634-11ea-8363-0242ac110008 took: 22.307942ms
Feb 23 12:07:03.065: INFO: Terminating ReplicationController wrapped-volume-race-bf3a60ed-5634-11ea-8363-0242ac110008 pods took: 300.621312ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:07:54.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2snbn" for this suite.
Feb 23 12:08:04.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:08:05.051: INFO: namespace: e2e-tests-emptydir-wrapper-2snbn, resource: bindings, ignored listing per whitelist
Feb 23 12:08:05.200: INFO: namespace e2e-tests-emptydir-wrapper-2snbn deletion completed in 10.361842556s

• [SLOW TEST:564.440 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:08:05.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 12:08:05.542: INFO: Creating ReplicaSet my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008
Feb 23 12:08:05.576: INFO: Pod name my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008: Found 0 pods out of 1
Feb 23 12:08:11.255: INFO: Pod name my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008: Found 1 pods out of 1
Feb 23 12:08:11.255: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008" is running
Feb 23 12:08:19.377: INFO: Pod "my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008-nfrzt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:08:05 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:08:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:08:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:08:05 +0000 UTC Reason: Message:}])
Feb 23 12:08:19.377: INFO: Trying to dial the pod
Feb 23 12:08:24.422: INFO: Controller my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008: Got expected result from replica 1 [my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008-nfrzt]: "my-hostname-basic-25c94631-5635-11ea-8363-0242ac110008-nfrzt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:08:24.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-t9gxs" for this suite.
Feb 23 12:08:30.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:08:30.662: INFO: namespace: e2e-tests-replicaset-t9gxs, resource: bindings, ignored listing per whitelist
Feb 23 12:08:30.748: INFO: namespace e2e-tests-replicaset-t9gxs deletion completed in 6.313815543s

• [SLOW TEST:25.546 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:08:30.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-34f66422-5635-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:08:31.114: INFO: Waiting up to 5m0s for pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-xmd4t" to be "success or failure"
Feb 23 12:08:31.142: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.748248ms
Feb 23 12:08:34.711: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.597172176s
Feb 23 12:08:36.827: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.71364678s
Feb 23 12:08:39.056: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.941896355s
Feb 23 12:08:41.069: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.955673906s
Feb 23 12:08:43.080: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.965837163s
Feb 23 12:08:45.544: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.430142217s
STEP: Saw pod success
Feb 23 12:08:45.544: INFO: Pod "pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:08:45.605: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 23 12:08:45.952: INFO: Waiting for pod pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008 to disappear
Feb 23 12:08:46.014: INFO: Pod pod-configmaps-34f7e4e9-5635-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:08:46.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xmd4t" for this suite.
Feb 23 12:08:54.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:08:54.102: INFO: namespace: e2e-tests-configmap-xmd4t, resource: bindings, ignored listing per whitelist
Feb 23 12:08:54.199: INFO: namespace e2e-tests-configmap-xmd4t deletion completed in 8.177568344s

• [SLOW TEST:23.451 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:08:54.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-42ed3601-5635-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:08:54.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-fsnbj" to be "success or failure"
Feb 23 12:08:54.554: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 90.005749ms
Feb 23 12:08:56.573: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108702205s
Feb 23 12:08:58.611: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1465093s
Feb 23 12:09:00.679: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214483952s
Feb 23 12:09:02.981: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516083227s
Feb 23 12:09:05.209: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.744448272s
STEP: Saw pod success
Feb 23 12:09:05.209: INFO: Pod "pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:09:05.254: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 12:09:05.920: INFO: Waiting for pod pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008 to disappear
Feb 23 12:09:05.953: INFO: Pod pod-projected-configmaps-42ee394e-5635-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:09:05.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fsnbj" for this suite.
Feb 23 12:09:12.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:09:12.308: INFO: namespace: e2e-tests-projected-fsnbj, resource: bindings, ignored listing per whitelist
Feb 23 12:09:12.339: INFO: namespace e2e-tests-projected-fsnbj deletion completed in 6.373287859s

• [SLOW TEST:18.139 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:09:12.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 23 12:09:12.582: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix745041037/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:09:12.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vpkd7" for this suite.
Feb 23 12:09:18.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:09:18.828: INFO: namespace: e2e-tests-kubectl-vpkd7, resource: bindings, ignored listing per whitelist
Feb 23 12:09:18.882: INFO: namespace e2e-tests-kubectl-vpkd7 deletion completed in 6.191230943s

• [SLOW TEST:6.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:09:18.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 23 12:09:19.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:21.946: INFO: stderr: ""
Feb 23 12:09:21.946: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 23 12:09:21.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:22.086: INFO: stderr: ""
Feb 23 12:09:22.087: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb 23 12:09:27.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:27.247: INFO: stderr: ""
Feb 23 12:09:27.247: INFO: stdout: "update-demo-nautilus-g6tf2 update-demo-nautilus-zs4x4 "
Feb 23 12:09:27.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6tf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:27.373: INFO: stderr: ""
Feb 23 12:09:27.373: INFO: stdout: ""
Feb 23 12:09:27.373: INFO: update-demo-nautilus-g6tf2 is created but not running
Feb 23 12:09:32.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:32.499: INFO: stderr: ""
Feb 23 12:09:32.499: INFO: stdout: "update-demo-nautilus-g6tf2 update-demo-nautilus-zs4x4 "
Feb 23 12:09:32.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6tf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:32.608: INFO: stderr: ""
Feb 23 12:09:32.608: INFO: stdout: ""
Feb 23 12:09:32.608: INFO: update-demo-nautilus-g6tf2 is created but not running
Feb 23 12:09:37.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:37.830: INFO: stderr: ""
Feb 23 12:09:37.830: INFO: stdout: "update-demo-nautilus-g6tf2 update-demo-nautilus-zs4x4 "
Feb 23 12:09:37.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6tf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:37.969: INFO: stderr: ""
Feb 23 12:09:37.969: INFO: stdout: "true"
Feb 23 12:09:37.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6tf2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:38.093: INFO: stderr: ""
Feb 23 12:09:38.093: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:09:38.093: INFO: validating pod update-demo-nautilus-g6tf2
Feb 23 12:09:38.134: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:09:38.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:09:38.134: INFO: update-demo-nautilus-g6tf2 is verified up and running
Feb 23 12:09:38.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zs4x4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:38.242: INFO: stderr: ""
Feb 23 12:09:38.242: INFO: stdout: "true"
Feb 23 12:09:38.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zs4x4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:09:38.414: INFO: stderr: ""
Feb 23 12:09:38.415: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:09:38.415: INFO: validating pod update-demo-nautilus-zs4x4
Feb 23 12:09:38.435: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:09:38.435: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:09:38.435: INFO: update-demo-nautilus-zs4x4 is verified up and running
STEP: rolling-update to new replication controller
Feb 23 12:09:38.437: INFO: scanned /root for discovery docs: 
Feb 23 12:09:38.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:10:14.204: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 23 12:10:14.204: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 23 12:10:14.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:10:14.375: INFO: stderr: ""
Feb 23 12:10:14.375: INFO: stdout: "update-demo-kitten-9qhxk update-demo-kitten-gknv7 "
Feb 23 12:10:14.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9qhxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:10:14.547: INFO: stderr: ""
Feb 23 12:10:14.547: INFO: stdout: "true"
Feb 23 12:10:14.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9qhxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:10:14.663: INFO: stderr: ""
Feb 23 12:10:14.663: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 23 12:10:14.663: INFO: validating pod update-demo-kitten-9qhxk
Feb 23 12:10:14.704: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 23 12:10:14.704: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 23 12:10:14.704: INFO: update-demo-kitten-9qhxk is verified up and running
Feb 23 12:10:14.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gknv7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:10:14.843: INFO: stderr: ""
Feb 23 12:10:14.843: INFO: stdout: "true"
Feb 23 12:10:14.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gknv7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r2k6f'
Feb 23 12:10:14.994: INFO: stderr: ""
Feb 23 12:10:14.994: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 23 12:10:14.994: INFO: validating pod update-demo-kitten-gknv7
Feb 23 12:10:15.003: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 23 12:10:15.003: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 23 12:10:15.004: INFO: update-demo-kitten-gknv7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:10:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r2k6f" for this suite.
Feb 23 12:10:49.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:10:49.430: INFO: namespace: e2e-tests-kubectl-r2k6f, resource: bindings, ignored listing per whitelist
Feb 23 12:10:49.438: INFO: namespace e2e-tests-kubectl-r2k6f deletion completed in 34.42710744s

• [SLOW TEST:90.556 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:10:49.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 23 12:10:49.654: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 23 12:10:49.665: INFO: Waiting for terminating namespaces to be deleted...
Feb 23 12:10:49.669: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 23 12:10:49.680: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 23 12:10:49.680: INFO: 	Container coredns ready: true, restart count 0
Feb 23 12:10:49.680: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 23 12:10:49.680: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 23 12:10:49.680: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 12:10:49.680: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 23 12:10:49.680: INFO: 	Container weave ready: true, restart count 0
Feb 23 12:10:49.680: INFO: 	Container weave-npc ready: true, restart count 0
Feb 23 12:10:49.680: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 23 12:10:49.680: INFO: 	Container coredns ready: true, restart count 0
Feb 23 12:10:49.680: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 12:10:49.680: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 23 12:10:49.680: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 23 12:10:49.827: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87b52ab9-5635-11ea-8363-0242ac110008.15f606d96876e664], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zpzbr/filler-pod-87b52ab9-5635-11ea-8363-0242ac110008 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87b52ab9-5635-11ea-8363-0242ac110008.15f606da982b227a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87b52ab9-5635-11ea-8363-0242ac110008.15f606db488031da], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87b52ab9-5635-11ea-8363-0242ac110008.15f606db79c93fd4], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f606dbc2c2f9b6], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:11:01.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zpzbr" for this suite.
Feb 23 12:11:09.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:11:09.444: INFO: namespace: e2e-tests-sched-pred-zpzbr, resource: bindings, ignored listing per whitelist
Feb 23 12:11:09.607: INFO: namespace e2e-tests-sched-pred-zpzbr deletion completed in 8.393586205s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.168 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
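[Editor's illustration] The scheduler-predicates test above sums the CPU requests already on the node, starts a filler pod that consumes most of the remaining allocatable CPU, and then shows that one more pod fails with "0/1 nodes are available: 1 Insufficient cpu." A hedged sketch of how such a CPU request is expressed follows; the function name, pod name, and the example "600m" value are illustrative (only the pause image comes from the events above).

package podspecs

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod sketches a pause pod that requests a fixed slice of node CPU, in the
// spirit of the filler pod above; cpu is a quantity string such as "600m".
func fillerPod(namespace, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1", // image reported in the scheduling events above
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
}

The scheduler only counts requests, so once the filler pod's request plus the system pods' requests exceed the node's allocatable CPU, any further pod with a non-zero request stays Pending with the FailedScheduling event shown above.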
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:11:09.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:11:22.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-k7njj" for this suite.
Feb 23 12:12:04.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:12:04.224: INFO: namespace: e2e-tests-kubelet-test-k7njj, resource: bindings, ignored listing per whitelist
Feb 23 12:12:04.315: INFO: namespace e2e-tests-kubelet-test-k7njj deletion completed in 42.160525113s

• [SLOW TEST:54.708 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
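[Editor's illustration] The hostAliases test above relies on the kubelet rendering spec.hostAliases into the container's /etc/hosts. A hedged sketch of such a pod follows; the alias IP, hostnames, and helper name are illustrative.

package podspecs

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasesPod sketches a busybox pod whose hostAliases entries the kubelet
// writes into the container's /etc/hosts before the container starts.
func hostAliasesPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			HostAliases: []corev1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/hosts && sleep 3600"},
			}},
		},
	}
}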
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:12:04.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-h97fx in namespace e2e-tests-proxy-68j2n
I0223 12:12:04.720573       8 runners.go:184] Created replication controller with name: proxy-service-h97fx, namespace: e2e-tests-proxy-68j2n, replica count: 1
I0223 12:12:05.771200       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:06.771525       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:07.771847       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:08.772374       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:09.772715       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:10.773143       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:11.773447       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:12.773693       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 12:12:13.774133       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0223 12:12:14.774508       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0223 12:12:15.774911       8 runners.go:184] proxy-service-h97fx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 23 12:12:15.792: INFO: setup took 11.242214495s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 23 12:12:15.924: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-68j2n/pods/http:proxy-service-h97fx-9wn46:162/proxy/: bar (200; 131.208559ms)
Feb 23 12:12:15.924: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-68j2n/services/http:proxy-service-h97fx:portname2/proxy/: bar (200; 130.853716ms)
Feb 23 12:12:15.924: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-68j2n/services/proxy-service-h97fx:portname2/proxy/: bar (200; 130.76118ms)
Feb 23 12:12:15.924: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-68j2n/pods/proxy-service-h97fx-9wn46:162/proxy/: bar (200; 130.937366ms)
Feb 23 12:12:15.925: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-68j2n/pods/http:proxy-service-h97fx-9wn46:1080/proxy/: 
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb 23 12:12:32.041: INFO: Waiting up to 5m0s for pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd" in namespace "e2e-tests-svcaccounts-mjd6s" to be "success or failure"
Feb 23 12:12:32.050: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.039721ms
Feb 23 12:12:34.100: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058493949s
Feb 23 12:12:36.116: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074753128s
Feb 23 12:12:38.981: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.939363544s
Feb 23 12:12:41.004: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962613124s
Feb 23 12:12:43.029: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.988188013s
Feb 23 12:12:45.043: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.001560887s
Feb 23 12:12:47.083: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.042045905s
Feb 23 12:12:49.201: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.159846251s
STEP: Saw pod success
Feb 23 12:12:49.201: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd" satisfied condition "success or failure"
Feb 23 12:12:49.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd container token-test: 
STEP: delete the pod
Feb 23 12:12:49.391: INFO: Waiting for pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd to disappear
Feb 23 12:12:49.418: INFO: Pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-lbzxd no longer exists
STEP: Creating a pod to test consume service account root CA
Feb 23 12:12:49.425: INFO: Waiting up to 5m0s for pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj" in namespace "e2e-tests-svcaccounts-mjd6s" to be "success or failure"
Feb 23 12:12:49.616: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 191.342995ms
Feb 23 12:12:51.938: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512871675s
Feb 23 12:12:53.978: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553060139s
Feb 23 12:12:56.110: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68555833s
Feb 23 12:12:58.128: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702996652s
Feb 23 12:13:00.339: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.913861486s
Feb 23 12:13:02.360: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.934878435s
Feb 23 12:13:04.572: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.146904162s
Feb 23 12:13:06.628: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 17.203624609s
Feb 23 12:13:09.098: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.673128177s
STEP: Saw pod success
Feb 23 12:13:09.098: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj" satisfied condition "success or failure"
Feb 23 12:13:09.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj container root-ca-test: 
STEP: delete the pod
Feb 23 12:13:09.323: INFO: Waiting for pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj to disappear
Feb 23 12:13:09.341: INFO: Pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-bc8vj no longer exists
STEP: Creating a pod to test consume service account namespace
Feb 23 12:13:09.458: INFO: Waiting up to 5m0s for pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l" in namespace "e2e-tests-svcaccounts-mjd6s" to be "success or failure"
Feb 23 12:13:09.473: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.504982ms
Feb 23 12:13:12.041: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582488804s
Feb 23 12:13:14.053: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5948395s
Feb 23 12:13:16.919: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 7.460241823s
Feb 23 12:13:19.526: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067818366s
Feb 23 12:13:21.541: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 12.082519229s
Feb 23 12:13:23.556: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.097799094s
Feb 23 12:13:25.585: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.126245001s
Feb 23 12:13:27.600: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.141308694s
STEP: Saw pod success
Feb 23 12:13:27.600: INFO: Pod "pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l" satisfied condition "success or failure"
Feb 23 12:13:27.604: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l container namespace-test: 
STEP: delete the pod
Feb 23 12:13:28.410: INFO: Waiting for pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l to disappear
Feb 23 12:13:28.425: INFO: Pod pod-service-account-c49eeede-5635-11ea-8363-0242ac110008-qhn7l no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:13:28.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-mjd6s" for this suite.
Feb 23 12:13:36.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:13:36.788: INFO: namespace: e2e-tests-svcaccounts-mjd6s, resource: bindings, ignored listing per whitelist
Feb 23 12:13:36.894: INFO: namespace e2e-tests-svcaccounts-mjd6s deletion completed in 8.242460362s

• [SLOW TEST:65.628 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
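[Editor's illustration] The three pods above (token-test, root-ca-test, namespace-test) each read one of the files that the auto-created service account credential is mounted under. The directory below is the standard in-pod mount path; the program is a hedged, runnable sketch rather than the suite's own container command.

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// The kubelet mounts the default service account credential into pods (unless
// automounting is disabled) at this well-known directory.
const serviceAccountDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(serviceAccountDir, name))
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}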
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:13:36.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-k7szj
Feb 23 12:13:49.205: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-k7szj
STEP: checking the pod's current state and verifying that restartCount is present
Feb 23 12:13:49.213: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:17:49.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-k7szj" for this suite.
Feb 23 12:17:57.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:17:58.074: INFO: namespace: e2e-tests-container-probe-k7szj, resource: bindings, ignored listing per whitelist
Feb 23 12:17:58.214: INFO: namespace e2e-tests-container-probe-k7szj deletion completed in 8.284553218s

• [SLOW TEST:261.319 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
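[Editor's illustration] The probe test above keeps /tmp/health present so the exec liveness probe never fails and the restart count stays at 0. A hedged sketch of such a pod follows; the image, command, and timing values are illustrative, and note the probe's embedded struct is named Handler in the v1.13-era API used by this run (newer API versions rename it to ProbeHandler).

package podspecs

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod sketches a pod whose container creates /tmp/health at startup,
// so an exec probe of `cat /tmp/health` keeps succeeding.
func livenessExecPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// corev1.Handler in the v1.13-era API; ProbeHandler in newer releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}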
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:17:58.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-cmg5x
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 23 12:17:58.614: INFO: Found 0 stateful pods, waiting for 3
Feb 23 12:18:08.816: INFO: Found 2 stateful pods, waiting for 3
Feb 23 12:18:19.000: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 12:18:19.000: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 12:18:19.000: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 23 12:18:28.650: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 12:18:28.650: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 12:18:28.650: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 12:18:28.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cmg5x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 12:18:29.298: INFO: stderr: "I0223 12:18:28.946800    2633 log.go:172] (0xc00014c6e0) (0xc0006635e0) Create stream\nI0223 12:18:28.947171    2633 log.go:172] (0xc00014c6e0) (0xc0006635e0) Stream added, broadcasting: 1\nI0223 12:18:28.952186    2633 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0223 12:18:28.952243    2633 log.go:172] (0xc00014c6e0) (0xc00074c000) Create stream\nI0223 12:18:28.952255    2633 log.go:172] (0xc00014c6e0) (0xc00074c000) Stream added, broadcasting: 3\nI0223 12:18:28.954267    2633 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0223 12:18:28.954320    2633 log.go:172] (0xc00014c6e0) (0xc00009a000) Create stream\nI0223 12:18:28.954340    2633 log.go:172] (0xc00014c6e0) (0xc00009a000) Stream added, broadcasting: 5\nI0223 12:18:28.955536    2633 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0223 12:18:29.160433    2633 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0223 12:18:29.160512    2633 log.go:172] (0xc00074c000) (3) Data frame handling\nI0223 12:18:29.160546    2633 log.go:172] (0xc00074c000) (3) Data frame sent\nI0223 12:18:29.285851    2633 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0223 12:18:29.286055    2633 log.go:172] (0xc00014c6e0) (0xc00074c000) Stream removed, broadcasting: 3\nI0223 12:18:29.286137    2633 log.go:172] (0xc0006635e0) (1) Data frame handling\nI0223 12:18:29.286159    2633 log.go:172] (0xc0006635e0) (1) Data frame sent\nI0223 12:18:29.286357    2633 log.go:172] (0xc00014c6e0) (0xc00009a000) Stream removed, broadcasting: 5\nI0223 12:18:29.286403    2633 log.go:172] (0xc00014c6e0) (0xc0006635e0) Stream removed, broadcasting: 1\nI0223 12:18:29.286415    2633 log.go:172] (0xc00014c6e0) Go away received\nI0223 12:18:29.287085    2633 log.go:172] (0xc00014c6e0) (0xc0006635e0) Stream removed, broadcasting: 1\nI0223 12:18:29.287096    2633 log.go:172] (0xc00014c6e0) (0xc00074c000) Stream removed, broadcasting: 3\nI0223 12:18:29.287100    2633 log.go:172] (0xc00014c6e0) (0xc00009a000) Stream removed, broadcasting: 5\n"
Feb 23 12:18:29.299: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 12:18:29.299: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 23 12:18:39.386: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 23 12:18:49.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cmg5x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 12:18:50.106: INFO: stderr: "I0223 12:18:49.676074    2655 log.go:172] (0xc0005b22c0) (0xc0000ed540) Create stream\nI0223 12:18:49.676449    2655 log.go:172] (0xc0005b22c0) (0xc0000ed540) Stream added, broadcasting: 1\nI0223 12:18:49.683593    2655 log.go:172] (0xc0005b22c0) Reply frame received for 1\nI0223 12:18:49.683651    2655 log.go:172] (0xc0005b22c0) (0xc0000ed5e0) Create stream\nI0223 12:18:49.683659    2655 log.go:172] (0xc0005b22c0) (0xc0000ed5e0) Stream added, broadcasting: 3\nI0223 12:18:49.684967    2655 log.go:172] (0xc0005b22c0) Reply frame received for 3\nI0223 12:18:49.684987    2655 log.go:172] (0xc0005b22c0) (0xc0000ed680) Create stream\nI0223 12:18:49.684993    2655 log.go:172] (0xc0005b22c0) (0xc0000ed680) Stream added, broadcasting: 5\nI0223 12:18:49.686466    2655 log.go:172] (0xc0005b22c0) Reply frame received for 5\nI0223 12:18:49.882398    2655 log.go:172] (0xc0005b22c0) Data frame received for 3\nI0223 12:18:49.882533    2655 log.go:172] (0xc0000ed5e0) (3) Data frame handling\nI0223 12:18:49.882675    2655 log.go:172] (0xc0000ed5e0) (3) Data frame sent\nI0223 12:18:50.088613    2655 log.go:172] (0xc0005b22c0) (0xc0000ed5e0) Stream removed, broadcasting: 3\nI0223 12:18:50.088951    2655 log.go:172] (0xc0005b22c0) Data frame received for 1\nI0223 12:18:50.088989    2655 log.go:172] (0xc0000ed540) (1) Data frame handling\nI0223 12:18:50.089043    2655 log.go:172] (0xc0000ed540) (1) Data frame sent\nI0223 12:18:50.089062    2655 log.go:172] (0xc0005b22c0) (0xc0000ed540) Stream removed, broadcasting: 1\nI0223 12:18:50.093192    2655 log.go:172] (0xc0005b22c0) (0xc0000ed680) Stream removed, broadcasting: 5\nI0223 12:18:50.093402    2655 log.go:172] (0xc0005b22c0) (0xc0000ed540) Stream removed, broadcasting: 1\nI0223 12:18:50.093436    2655 log.go:172] (0xc0005b22c0) (0xc0000ed5e0) Stream removed, broadcasting: 3\nI0223 12:18:50.093454    2655 log.go:172] (0xc0005b22c0) (0xc0000ed680) Stream removed, broadcasting: 5\n"
Feb 23 12:18:50.106: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 12:18:50.106: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 12:19:00.181: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:19:00.181: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 12:19:00.181: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 12:19:10.477: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:19:10.477: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 12:19:10.477: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 12:19:20.544: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:19:20.544: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 12:19:30.199: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 23 12:19:40.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cmg5x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 12:19:41.188: INFO: stderr: "I0223 12:19:40.462507    2677 log.go:172] (0xc000714370) (0xc000732640) Create stream\nI0223 12:19:40.463122    2677 log.go:172] (0xc000714370) (0xc000732640) Stream added, broadcasting: 1\nI0223 12:19:40.484134    2677 log.go:172] (0xc000714370) Reply frame received for 1\nI0223 12:19:40.484310    2677 log.go:172] (0xc000714370) (0xc0007326e0) Create stream\nI0223 12:19:40.484348    2677 log.go:172] (0xc000714370) (0xc0007326e0) Stream added, broadcasting: 3\nI0223 12:19:40.487114    2677 log.go:172] (0xc000714370) Reply frame received for 3\nI0223 12:19:40.487182    2677 log.go:172] (0xc000714370) (0xc0005c4dc0) Create stream\nI0223 12:19:40.487250    2677 log.go:172] (0xc000714370) (0xc0005c4dc0) Stream added, broadcasting: 5\nI0223 12:19:40.489426    2677 log.go:172] (0xc000714370) Reply frame received for 5\nI0223 12:19:40.860230    2677 log.go:172] (0xc000714370) Data frame received for 3\nI0223 12:19:40.860328    2677 log.go:172] (0xc0007326e0) (3) Data frame handling\nI0223 12:19:40.860352    2677 log.go:172] (0xc0007326e0) (3) Data frame sent\nI0223 12:19:41.174986    2677 log.go:172] (0xc000714370) Data frame received for 1\nI0223 12:19:41.175143    2677 log.go:172] (0xc000714370) (0xc0007326e0) Stream removed, broadcasting: 3\nI0223 12:19:41.175236    2677 log.go:172] (0xc000732640) (1) Data frame handling\nI0223 12:19:41.175292    2677 log.go:172] (0xc000732640) (1) Data frame sent\nI0223 12:19:41.175333    2677 log.go:172] (0xc000714370) (0xc0005c4dc0) Stream removed, broadcasting: 5\nI0223 12:19:41.175395    2677 log.go:172] (0xc000714370) (0xc000732640) Stream removed, broadcasting: 1\nI0223 12:19:41.175423    2677 log.go:172] (0xc000714370) Go away received\nI0223 12:19:41.176174    2677 log.go:172] (0xc000714370) (0xc000732640) Stream removed, broadcasting: 1\nI0223 12:19:41.176193    2677 log.go:172] (0xc000714370) (0xc0007326e0) Stream removed, broadcasting: 3\nI0223 12:19:41.176205    2677 log.go:172] (0xc000714370) (0xc0005c4dc0) Stream removed, broadcasting: 5\n"
Feb 23 12:19:41.188: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 12:19:41.188: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 12:19:41.273: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 23 12:19:51.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cmg5x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 12:19:52.249: INFO: stderr: "I0223 12:19:51.836337    2699 log.go:172] (0xc00069c0b0) (0xc0006c2000) Create stream\nI0223 12:19:51.836955    2699 log.go:172] (0xc00069c0b0) (0xc0006c2000) Stream added, broadcasting: 1\nI0223 12:19:51.842496    2699 log.go:172] (0xc00069c0b0) Reply frame received for 1\nI0223 12:19:51.842535    2699 log.go:172] (0xc00069c0b0) (0xc00022ad20) Create stream\nI0223 12:19:51.842573    2699 log.go:172] (0xc00069c0b0) (0xc00022ad20) Stream added, broadcasting: 3\nI0223 12:19:51.844878    2699 log.go:172] (0xc00069c0b0) Reply frame received for 3\nI0223 12:19:51.844994    2699 log.go:172] (0xc00069c0b0) (0xc00022ae60) Create stream\nI0223 12:19:51.845007    2699 log.go:172] (0xc00069c0b0) (0xc00022ae60) Stream added, broadcasting: 5\nI0223 12:19:51.846456    2699 log.go:172] (0xc00069c0b0) Reply frame received for 5\nI0223 12:19:52.030797    2699 log.go:172] (0xc00069c0b0) Data frame received for 3\nI0223 12:19:52.030979    2699 log.go:172] (0xc00022ad20) (3) Data frame handling\nI0223 12:19:52.031041    2699 log.go:172] (0xc00022ad20) (3) Data frame sent\nI0223 12:19:52.230702    2699 log.go:172] (0xc00069c0b0) Data frame received for 1\nI0223 12:19:52.230796    2699 log.go:172] (0xc0006c2000) (1) Data frame handling\nI0223 12:19:52.230844    2699 log.go:172] (0xc0006c2000) (1) Data frame sent\nI0223 12:19:52.230877    2699 log.go:172] (0xc00069c0b0) (0xc0006c2000) Stream removed, broadcasting: 1\nI0223 12:19:52.233866    2699 log.go:172] (0xc00069c0b0) (0xc00022ad20) Stream removed, broadcasting: 3\nI0223 12:19:52.234446    2699 log.go:172] (0xc00069c0b0) (0xc00022ae60) Stream removed, broadcasting: 5\nI0223 12:19:52.234569    2699 log.go:172] (0xc00069c0b0) (0xc0006c2000) Stream removed, broadcasting: 1\nI0223 12:19:52.234589    2699 log.go:172] (0xc00069c0b0) (0xc00022ad20) Stream removed, broadcasting: 3\nI0223 12:19:52.234595    2699 log.go:172] (0xc00069c0b0) (0xc00022ae60) Stream removed, broadcasting: 5\n"
Feb 23 12:19:52.249: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 12:19:52.249: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 12:20:02.306: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:20:02.307: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:02.307: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:12.333: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:20:12.333: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:12.333: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:22.357: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:20:22.358: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:22.358: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:32.327: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
Feb 23 12:20:32.327: INFO: Waiting for Pod e2e-tests-statefulset-cmg5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 23 12:20:42.727: INFO: Waiting for StatefulSet e2e-tests-statefulset-cmg5x/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 23 12:20:52.334: INFO: Deleting all statefulset in ns e2e-tests-statefulset-cmg5x
Feb 23 12:20:52.340: INFO: Scaling statefulset ss2 to 0
Feb 23 12:21:32.464: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 12:21:32.483: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:21:32.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-cmg5x" for this suite.
Feb 23 12:21:40.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:21:41.171: INFO: namespace: e2e-tests-statefulset-cmg5x, resource: bindings, ignored listing per whitelist
Feb 23 12:21:41.213: INFO: namespace e2e-tests-statefulset-cmg5x deletion completed in 8.545826847s

• [SLOW TEST:222.999 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
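[Editor's illustration] The StatefulSet test above changes the pod template image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine, which creates the new controller revision seen in the log (ss2-7c9b54fd4c) and rolls pods in reverse ordinal order; reverting the image rolls back to the previous revision (ss2-6c5cd755cd). A hedged sketch of an equivalent StatefulSet shape follows; the selector labels and helper names are illustrative, while the name ss2, the headless service "test", and the nginx images come from the log above.

package statefulsets

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// nginxStatefulSet sketches a 3-replica StatefulSet like ss2 above; changing the
// image argument and re-applying drives a rolling update, reverting it rolls back.
func nginxStatefulSet(namespace, image string) *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss2"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2", Namespace: namespace},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // headless service created in the BeforeEach step above
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: image, // e.g. docker.io/library/nginx:1.14-alpine or :1.15-alpine
					}},
				},
			},
		},
	}
}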
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:21:41.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0c13f748-5637-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:21:41.623: INFO: Waiting up to 5m0s for pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-msvxz" to be "success or failure"
Feb 23 12:21:41.765: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 142.234331ms
Feb 23 12:21:43.790: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167186416s
Feb 23 12:21:45.802: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179360345s
Feb 23 12:21:47.814: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190788743s
Feb 23 12:21:49.839: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216466491s
Feb 23 12:21:52.084: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.460974998s
STEP: Saw pod success
Feb 23 12:21:52.084: INFO: Pod "pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:21:52.092: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 12:21:52.505: INFO: Waiting for pod pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008 to disappear
Feb 23 12:21:52.531: INFO: Pod pod-secrets-0c2b4308-5637-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:21:52.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-msvxz" for this suite.
Feb 23 12:21:58.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:21:58.808: INFO: namespace: e2e-tests-secrets-msvxz, resource: bindings, ignored listing per whitelist
Feb 23 12:21:58.911: INFO: namespace e2e-tests-secrets-msvxz deletion completed in 6.363579501s
STEP: Destroying namespace "e2e-tests-secret-namespace-f5mkz" for this suite.
Feb 23 12:22:04.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:22:05.015: INFO: namespace: e2e-tests-secret-namespace-f5mkz, resource: bindings, ignored listing per whitelist
Feb 23 12:22:05.117: INFO: namespace e2e-tests-secret-namespace-f5mkz deletion completed in 6.205138824s

• [SLOW TEST:23.903 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
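[Editor's illustration] The Secrets test above mounts a Secret by name and verifies that a same-named Secret in a second namespace does not interfere, since volume sources only resolve within the pod's own namespace. A hedged sketch of such a pod follows; the mount path, image, and helper name are illustrative.

package podspecs

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod sketches a pod that mounts the named Secret from its own
// namespace as a read-only volume.
func secretVolumePod(namespace, secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
}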
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:22:05.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1a5287f9-5637-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:22:05.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-lv6dj" to be "success or failure"
Feb 23 12:22:05.434: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 79.674629ms
Feb 23 12:22:07.498: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143224714s
Feb 23 12:22:09.522: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167544072s
Feb 23 12:22:11.551: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196882453s
Feb 23 12:22:13.582: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227893286s
Feb 23 12:22:15.609: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.254366848s
STEP: Saw pod success
Feb 23 12:22:15.609: INFO: Pod "pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:22:15.615: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 23 12:22:15.681: INFO: Waiting for pod pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008 to disappear
Feb 23 12:22:16.493: INFO: Pod pod-configmaps-1a552abc-5637-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:22:16.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lv6dj" for this suite.
Feb 23 12:22:22.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:22:22.980: INFO: namespace: e2e-tests-configmap-lv6dj, resource: bindings, ignored listing per whitelist
Feb 23 12:22:23.073: INFO: namespace e2e-tests-configmap-lv6dj deletion completed in 6.559687047s

• [SLOW TEST:17.956 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
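[Editor's illustration] The ConfigMap volume test above surfaces a ConfigMap's keys as files inside the container. A hedged sketch of the pod shape follows; the mount path, image, and helper name are illustrative, not the test fixture's values.

package podspecs

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod sketches a pod that mounts a ConfigMap as a volume, so each
// key appears as a file under the mount path.
func configMapVolumePod(namespace, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}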
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:22:23.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-2518f98a-5637-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:22:23.409: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-bpwhv" to be "success or failure"
Feb 23 12:22:23.416: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.998978ms
Feb 23 12:22:26.122: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712882795s
Feb 23 12:22:28.140: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.73104034s
Feb 23 12:22:30.514: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.104656288s
Feb 23 12:22:32.569: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.160287741s
Feb 23 12:22:34.582: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.173152784s
STEP: Saw pod success
Feb 23 12:22:34.582: INFO: Pod "pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:22:34.586: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 12:22:35.129: INFO: Waiting for pod pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008 to disappear
Feb 23 12:22:35.431: INFO: Pod pod-projected-configmaps-251af05c-5637-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:22:35.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bpwhv" for this suite.
Feb 23 12:22:41.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:22:41.639: INFO: namespace: e2e-tests-projected-bpwhv, resource: bindings, ignored listing per whitelist
Feb 23 12:22:41.761: INFO: namespace e2e-tests-projected-bpwhv deletion completed in 6.318840234s

• [SLOW TEST:18.688 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
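[Editor's illustration] The projected ConfigMap tests above (this one and the multiple-volumes case that follows) use a projected volume backed by a ConfigMap, here consumed by a container running as a non-root UID. A hedged sketch follows; the UID 1000, mount path, image, and helper names are illustrative. Mounting the same projection under additional volume entries and mount paths is how the multiple-volumes variant below consumes it twice in one pod.

package podspecs

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// projectedConfigMapPod sketches a projected volume whose single source is a
// ConfigMap, read by a container running as a non-root user.
func projectedConfigMapPod(namespace, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "id && cat /etc/projected-configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
}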
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:22:41.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-30320241-5637-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:22:42.029: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-szvtm" to be "success or failure"
Feb 23 12:22:42.181: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 151.755809ms
Feb 23 12:22:44.197: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1685006s
Feb 23 12:22:46.222: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192949457s
Feb 23 12:22:48.375: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346353545s
Feb 23 12:22:50.389: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360142246s
Feb 23 12:22:52.834: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.805497117s
STEP: Saw pod success
Feb 23 12:22:52.835: INFO: Pod "pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:22:52.851: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 12:22:53.460: INFO: Waiting for pod pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008 to disappear
Feb 23 12:22:53.473: INFO: Pod pod-projected-configmaps-3034aec2-5637-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:22:53.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-szvtm" for this suite.
Feb 23 12:22:59.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:22:59.659: INFO: namespace: e2e-tests-projected-szvtm, resource: bindings, ignored listing per whitelist
Feb 23 12:22:59.692: INFO: namespace e2e-tests-projected-szvtm deletion completed in 6.204111958s

• [SLOW TEST:17.930 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:22:59.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 12:23:08.266: INFO: Waiting up to 5m0s for pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008" in namespace "e2e-tests-pods-6482x" to be "success or failure"
Feb 23 12:23:08.301: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.569636ms
Feb 23 12:23:10.319: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052022826s
Feb 23 12:23:12.327: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060225503s
Feb 23 12:23:14.659: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391925191s
Feb 23 12:23:16.677: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410115477s
Feb 23 12:23:18.724: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.457378079s
STEP: Saw pod success
Feb 23 12:23:18.724: INFO: Pod "client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:23:18.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008 container env3cont: 
STEP: delete the pod
Feb 23 12:23:19.046: INFO: Waiting for pod client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008 to disappear
Feb 23 12:23:19.062: INFO: Pod client-envvars-3fd70f1c-5637-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:23:19.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6482x" for this suite.
Feb 23 12:24:15.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:24:15.188: INFO: namespace: e2e-tests-pods-6482x, resource: bindings, ignored listing per whitelist
Feb 23 12:24:15.274: INFO: namespace e2e-tests-pods-6482x deletion completed in 56.202788167s

• [SLOW TEST:75.582 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
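
Editor's note: the Pods test above starts a backend pod and a Service first, then a client pod ("client-envvars-…", container "env3cont"), and checks that the client's environment contains variables for that Service; the kubelet only injects them for Services that already exist when the container starts. As a rough illustration of the naming rule involved (not the exact fixture the e2e test builds; the service name below is invented), this Go sketch derives the variable names:

package main

import (
	"fmt"
	"strings"
)

// serviceEnvPrefix mirrors how the kubelet derives environment-variable
// names from a Service name: upper-case it and replace '-' with '_'.
func serviceEnvPrefix(serviceName string) string {
	return strings.ToUpper(strings.ReplaceAll(serviceName, "-", "_"))
}

func main() {
	// "fooservice-1" is an arbitrary example name, not the one used by the test.
	p := serviceEnvPrefix("fooservice-1")
	// A container started after the Service exists should see at least:
	fmt.Printf("%s_SERVICE_HOST=<cluster IP>\n", p)
	fmt.Printf("%s_SERVICE_PORT=<first port>\n", p)
}
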
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:24:15.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 23 12:24:15.508: INFO: Waiting up to 5m0s for pod "pod-67eb28c7-5637-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-4splr" to be "success or failure"
Feb 23 12:24:15.514: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5257ms
Feb 23 12:24:17.533: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024956995s
Feb 23 12:24:19.551: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043428687s
Feb 23 12:24:22.756: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.247987446s
Feb 23 12:24:24.778: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.270092094s
Feb 23 12:24:26.788: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.280605197s
Feb 23 12:24:28.802: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.293875606s
Feb 23 12:24:30.845: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.33719088s
STEP: Saw pod success
Feb 23 12:24:30.845: INFO: Pod "pod-67eb28c7-5637-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:24:30.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-67eb28c7-5637-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 12:24:31.730: INFO: Waiting for pod pod-67eb28c7-5637-11ea-8363-0242ac110008 to disappear
Feb 23 12:24:31.757: INFO: Pod pod-67eb28c7-5637-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:24:31.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4splr" for this suite.
Feb 23 12:24:37.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:24:38.051: INFO: namespace: e2e-tests-emptydir-4splr, resource: bindings, ignored listing per whitelist
Feb 23 12:24:38.113: INFO: namespace e2e-tests-emptydir-4splr deletion completed in 6.345179988s

• [SLOW TEST:22.839 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
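
Editor's note: the "(root,0777,default)" case above creates a pod with an emptyDir volume on the node's default medium and checks that the mount is world-writable (0777) for a root container. A minimal sketch of such a pod using k8s.io/api types, with busybox and a shell command standing in for the test's own mount-test image (image, names, and paths are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium empty selects the node's default storage medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the mount's mode; the expectation is a drwxrwxrwx directory
				// that a root process can write into.
				Command:      []string{"sh", "-c", "ls -ld /mnt/test && touch /mnt/test/ok"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
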
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:24:38.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 12:24:38.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:24:48.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vfk72" for this suite.
Feb 23 12:25:32.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:25:32.836: INFO: namespace: e2e-tests-pods-vfk72, resource: bindings, ignored listing per whitelist
Feb 23 12:25:32.836: INFO: namespace e2e-tests-pods-vfk72 deletion completed in 44.433072428s

• [SLOW TEST:54.722 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:25:32.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 23 12:25:53.480: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 23 12:25:53.500: INFO: Pod pod-with-poststart-http-hook still exists
Feb 23 12:25:55.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 23 12:25:55.512: INFO: Pod pod-with-poststart-http-hook still exists
Feb 23 12:25:57.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 23 12:25:57.518: INFO: Pod pod-with-poststart-http-hook still exists
Feb 23 12:25:59.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 23 12:25:59.510: INFO: Pod pod-with-poststart-http-hook still exists
Feb 23 12:26:01.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 23 12:26:01.517: INFO: Pod pod-with-poststart-http-hook still exists
Feb 23 12:26:03.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 23 12:26:03.514: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:26:03.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qjk44" for this suite.
Feb 23 12:26:27.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:26:27.752: INFO: namespace: e2e-tests-container-lifecycle-hook-qjk44, resource: bindings, ignored listing per whitelist
Feb 23 12:26:27.777: INFO: namespace e2e-tests-container-lifecycle-hook-qjk44 deletion completed in 24.254916714s

• [SLOW TEST:54.940 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
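
Editor's note: the lifecycle-hook test above first starts a helper pod that serves HTTP, then creates a pod whose container declares a postStart httpGet hook aimed at that helper, verifies the hook request arrived, and finally deletes the pod. A hedged sketch of the hook declaration; the image, path, and target address are invented, and the hook type is named LifecycleHandler in current k8s.io/api releases (older trees, such as the v1.13 one this log was produced with, call it Handler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "hooked",
				Image: "nginx", // illustrative; the e2e suite ships its own handler images
				Lifecycle: &corev1.Lifecycle{
					// The kubelet issues this GET right after the container starts;
					// the test checks the helper saw it before deleting the pod.
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.32.0.10", // hypothetical address of the handler pod
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
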
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:26:27.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-64xzq
Feb 23 12:26:38.272: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-64xzq
STEP: checking the pod's current state and verifying that restartCount is present
Feb 23 12:26:38.278: INFO: Initial restart count of pod liveness-http is 0
Feb 23 12:26:54.874: INFO: Restart count of pod e2e-tests-container-probe-64xzq/liveness-http is now 1 (16.596285987s elapsed)
Feb 23 12:27:15.449: INFO: Restart count of pod e2e-tests-container-probe-64xzq/liveness-http is now 2 (37.17112284s elapsed)
Feb 23 12:27:35.880: INFO: Restart count of pod e2e-tests-container-probe-64xzq/liveness-http is now 3 (57.602189484s elapsed)
Feb 23 12:27:56.154: INFO: Restart count of pod e2e-tests-container-probe-64xzq/liveness-http is now 4 (1m17.876116107s elapsed)
Feb 23 12:28:58.887: INFO: Restart count of pod e2e-tests-container-probe-64xzq/liveness-http is now 5 (2m20.609142233s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:28:58.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-64xzq" for this suite.
Feb 23 12:29:05.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:29:05.153: INFO: namespace: e2e-tests-container-probe-64xzq, resource: bindings, ignored listing per whitelist
Feb 23 12:29:05.377: INFO: namespace e2e-tests-container-probe-64xzq deletion completed in 6.343607027s

• [SLOW TEST:157.600 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
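
Editor's note: the probe test above runs a pod named liveness-http whose HTTP liveness probe eventually fails, and asserts that the kubelet's restartCount only ever increases (the log shows it stepping from 1 to 5). A sketch of a pod with such a probe; the image, port, and timings are placeholders, and assigning the promoted HTTPGet field sidesteps the Handler/ProbeHandler rename between k8s.io/api releases:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Probe embeds its handler struct, so HTTPGet can be set via field promotion.
	probe := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       3,
		FailureThreshold:    1,
	}
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "k8s.gcr.io/liveness", // illustrative image whose /healthz starts failing
				Args:          []string{"/server"},
				LivenessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
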
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:29:05.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:29:15.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tjlwh" for this suite.
Feb 23 12:30:05.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:30:05.989: INFO: namespace: e2e-tests-kubelet-test-tjlwh, resource: bindings, ignored listing per whitelist
Feb 23 12:30:05.993: INFO: namespace e2e-tests-kubelet-test-tjlwh deletion completed in 50.283716703s

• [SLOW TEST:60.615 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
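
Editor's note: the Kubelet test above schedules a busybox container with a read-only root filesystem and checks that a write to it fails. A minimal sketch of the relevant securityContext (pod name, command, and paths are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// The write is expected to fail with a read-only file system error.
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
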
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:30:05.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 12:30:06.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-dtg82" to be "success or failure"
Feb 23 12:30:06.215: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209843ms
Feb 23 12:30:08.263: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0562463s
Feb 23 12:30:10.298: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091839048s
Feb 23 12:30:12.615: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408328872s
Feb 23 12:30:14.643: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436420165s
Feb 23 12:30:16.671: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.464280648s
STEP: Saw pod success
Feb 23 12:30:16.671: INFO: Pod "downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:30:16.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 12:30:16.804: INFO: Waiting for pod downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008 to disappear
Feb 23 12:30:16.811: INFO: Pod downwardapi-volume-38f1acdc-5638-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:30:16.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dtg82" for this suite.
Feb 23 12:30:22.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:30:23.006: INFO: namespace: e2e-tests-downward-api-dtg82, resource: bindings, ignored listing per whitelist
Feb 23 12:30:23.017: INFO: namespace e2e-tests-downward-api-dtg82 deletion completed in 6.201561642s

• [SLOW TEST:17.023 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
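
Editor's note: the Downward API volume test above projects the container's own memory request into a file and has the container print it. A sketch of the volume wiring (file name, request sizes, and divisor are illustrative; the container name matches the one in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
					Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1Mi"), // file would contain "32"
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
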
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:30:23.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 23 12:30:23.208: INFO: Waiting up to 5m0s for pod "downward-api-43157714-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-nw82r" to be "success or failure"
Feb 23 12:30:23.224: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.878659ms
Feb 23 12:30:25.950: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741536811s
Feb 23 12:30:27.968: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.759741921s
Feb 23 12:30:30.011: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802845091s
Feb 23 12:30:32.040: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.831940355s
Feb 23 12:30:34.062: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.853719755s
STEP: Saw pod success
Feb 23 12:30:34.062: INFO: Pod "downward-api-43157714-5638-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:30:34.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-43157714-5638-11ea-8363-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 23 12:30:34.721: INFO: Waiting for pod downward-api-43157714-5638-11ea-8363-0242ac110008 to disappear
Feb 23 12:30:34.747: INFO: Pod downward-api-43157714-5638-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:30:34.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nw82r" for this suite.
Feb 23 12:30:43.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:30:43.106: INFO: namespace: e2e-tests-downward-api-nw82r, resource: bindings, ignored listing per whitelist
Feb 23 12:30:43.200: INFO: namespace e2e-tests-downward-api-nw82r deletion completed in 8.439639959s

• [SLOW TEST:20.184 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
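
Editor's note: the sig-node Downward API test above exposes the container's own cpu/memory limits and requests as environment variables (via resourceFieldRef) and checks the printed values. A shorter sketch of just the container, with invented variable names and resource values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		Env: []corev1.EnvVar{
			{
				Name: "CPU_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
				},
			},
			{
				Name: "MEMORY_REQUEST",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
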
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:30:43.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-4f2905ae-5638-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:30:43.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-w5ndc" to be "success or failure"
Feb 23 12:30:43.527: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.338215ms
Feb 23 12:30:45.625: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123887957s
Feb 23 12:30:47.639: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137692399s
Feb 23 12:30:50.062: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560721045s
Feb 23 12:30:52.287: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.785097772s
Feb 23 12:30:54.303: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.801531643s
STEP: Saw pod success
Feb 23 12:30:54.303: INFO: Pod "pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:30:54.307: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 12:30:54.412: INFO: Waiting for pod pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008 to disappear
Feb 23 12:30:54.428: INFO: Pod pod-projected-configmaps-4f2a884b-5638-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:30:54.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w5ndc" for this suite.
Feb 23 12:31:02.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:31:02.726: INFO: namespace: e2e-tests-projected-w5ndc, resource: bindings, ignored listing per whitelist
Feb 23 12:31:02.886: INFO: namespace e2e-tests-projected-w5ndc deletion completed in 8.435165556s

• [SLOW TEST:19.685 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
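
Editor's note: the projected-configMap test above creates a ConfigMap, mounts it through a projected volume with an item mapping (key mapped to a custom path), and reads the file back from the pod. A sketch of the mapping; the ConfigMap name, key, and path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Map the key "data-2" to a nested path instead of its default file name.
								Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
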
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:31:02.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5ae2a1da-5638-11ea-8363-0242ac110008
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:31:17.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-grwnd" for this suite.
Feb 23 12:31:41.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:31:41.586: INFO: namespace: e2e-tests-configmap-grwnd, resource: bindings, ignored listing per whitelist
Feb 23 12:31:41.732: INFO: namespace e2e-tests-configmap-grwnd deletion completed in 24.32205987s

• [SLOW TEST:38.846 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
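
Editor's note: the ConfigMap test above stores both a text key (data) and a binary key (binaryData), mounts them into a pod, and waits for each to appear. A sketch of such a ConfigMap object; the name, keys, and bytes are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-demo"},
		// Plain UTF-8 values go in Data ...
		Data: map[string]string{"data": "value-1"},
		// ... while arbitrary bytes go in BinaryData and are base64-encoded on the wire.
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe, 0x00, 0x01}},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}
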
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:31:41.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 23 12:31:42.035: INFO: Waiting up to 5m0s for pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-var-expansion-824n4" to be "success or failure"
Feb 23 12:31:42.189: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 153.331252ms
Feb 23 12:31:44.210: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17466073s
Feb 23 12:31:46.226: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190410359s
Feb 23 12:31:48.426: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390496455s
Feb 23 12:31:50.440: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.404570843s
Feb 23 12:31:52.450: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.415095582s
STEP: Saw pod success
Feb 23 12:31:52.450: INFO: Pod "var-expansion-7211deb3-5638-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:31:52.466: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7211deb3-5638-11ea-8363-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 23 12:31:53.157: INFO: Waiting for pod var-expansion-7211deb3-5638-11ea-8363-0242ac110008 to disappear
Feb 23 12:31:53.365: INFO: Pod var-expansion-7211deb3-5638-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:31:53.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-824n4" for this suite.
Feb 23 12:32:01.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:32:01.599: INFO: namespace: e2e-tests-var-expansion-824n4, resource: bindings, ignored listing per whitelist
Feb 23 12:32:01.841: INFO: namespace e2e-tests-var-expansion-824n4 deletion completed in 8.450481892s

• [SLOW TEST:20.109 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
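
Editor's note: the Variable Expansion test above defines environment variables on the container and references them with the $(VAR) syntax inside the container's command, then checks the echoed output. A sketch with invented variable names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
				},
				// $(FOO) and $(BAR) are expanded by Kubernetes before the command
				// reaches the shell; $$(FOO) would escape the expansion.
				Command: []string{"sh", "-c", "echo test-value $(FOO) $(BAR)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
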
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:32:01.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 23 12:32:02.035: INFO: Waiting up to 5m0s for pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-kp5qm" to be "success or failure"
Feb 23 12:32:02.048: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.470596ms
Feb 23 12:32:04.078: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042058076s
Feb 23 12:32:06.092: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056189754s
Feb 23 12:32:08.238: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202948721s
Feb 23 12:32:10.250: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214934993s
Feb 23 12:32:12.605: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569592799s
STEP: Saw pod success
Feb 23 12:32:12.605: INFO: Pod "downward-api-7df9bf89-5638-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:32:12.633: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7df9bf89-5638-11ea-8363-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 23 12:32:13.156: INFO: Waiting for pod downward-api-7df9bf89-5638-11ea-8363-0242ac110008 to disappear
Feb 23 12:32:13.204: INFO: Pod downward-api-7df9bf89-5638-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:32:13.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kp5qm" for this suite.
Feb 23 12:32:21.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:32:21.508: INFO: namespace: e2e-tests-downward-api-kp5qm, resource: bindings, ignored listing per whitelist
Feb 23 12:32:21.528: INFO: namespace e2e-tests-downward-api-kp5qm deletion completed in 8.313670995s

• [SLOW TEST:19.686 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
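
Editor's note: the pod-UID variant above uses the same env-var mechanism but with a fieldRef instead of a resourceFieldRef; the variable name below is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			// metadata.uid is resolved for the running pod at container start.
			FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.uid"},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
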
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:32:21.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 23 12:32:21.946: INFO: Number of nodes with available pods: 0
Feb 23 12:32:21.946: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:22.976: INFO: Number of nodes with available pods: 0
Feb 23 12:32:22.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:24.020: INFO: Number of nodes with available pods: 0
Feb 23 12:32:24.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:24.986: INFO: Number of nodes with available pods: 0
Feb 23 12:32:24.986: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:25.968: INFO: Number of nodes with available pods: 0
Feb 23 12:32:25.968: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:26.969: INFO: Number of nodes with available pods: 0
Feb 23 12:32:26.969: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:27.967: INFO: Number of nodes with available pods: 0
Feb 23 12:32:27.967: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:28.971: INFO: Number of nodes with available pods: 0
Feb 23 12:32:28.971: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:29.981: INFO: Number of nodes with available pods: 1
Feb 23 12:32:29.981: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 23 12:32:30.091: INFO: Number of nodes with available pods: 0
Feb 23 12:32:30.091: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:31.659: INFO: Number of nodes with available pods: 0
Feb 23 12:32:31.659: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:32.164: INFO: Number of nodes with available pods: 0
Feb 23 12:32:32.164: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:33.151: INFO: Number of nodes with available pods: 0
Feb 23 12:32:33.151: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:34.576: INFO: Number of nodes with available pods: 0
Feb 23 12:32:34.576: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:35.154: INFO: Number of nodes with available pods: 0
Feb 23 12:32:35.154: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:37.332: INFO: Number of nodes with available pods: 0
Feb 23 12:32:37.333: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:38.239: INFO: Number of nodes with available pods: 0
Feb 23 12:32:38.239: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:39.156: INFO: Number of nodes with available pods: 0
Feb 23 12:32:39.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:40.115: INFO: Number of nodes with available pods: 0
Feb 23 12:32:40.115: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:32:41.114: INFO: Number of nodes with available pods: 1
Feb 23 12:32:41.114: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wq5kz, will wait for the garbage collector to delete the pods
Feb 23 12:32:41.205: INFO: Deleting DaemonSet.extensions daemon-set took: 22.401181ms
Feb 23 12:32:41.305: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.426193ms
Feb 23 12:32:52.796: INFO: Number of nodes with available pods: 0
Feb 23 12:32:52.796: INFO: Number of running nodes: 0, number of available pods: 0
Feb 23 12:32:52.810: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wq5kz/daemonsets","resourceVersion":"22646593"},"items":null}

Feb 23 12:32:52.817: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wq5kz/pods","resourceVersion":"22646593"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:32:52.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wq5kz" for this suite.
Feb 23 12:33:00.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:33:00.982: INFO: namespace: e2e-tests-daemonsets-wq5kz, resource: bindings, ignored listing per whitelist
Feb 23 12:33:01.216: INFO: namespace e2e-tests-daemonsets-wq5kz deletion completed in 8.378912042s

• [SLOW TEST:39.687 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
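
Editor's note: the DaemonSet test above creates a simple one-container DaemonSet, waits for a pod on every schedulable node (one node in this cluster), then forces a daemon pod into the Failed phase and expects the controller to replace it. A sketch of a comparably simple DaemonSet; labels, image, and port are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx", // stand-in for the image the e2e suite ships
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
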
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:33:01.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a1639f05-5638-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:33:01.423: INFO: Waiting up to 5m0s for pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-tw8x7" to be "success or failure"
Feb 23 12:33:01.443: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.298843ms
Feb 23 12:33:03.473: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049758463s
Feb 23 12:33:05.492: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068635987s
Feb 23 12:33:08.130: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.706799374s
Feb 23 12:33:10.142: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718677538s
Feb 23 12:33:12.169: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.74527531s
STEP: Saw pod success
Feb 23 12:33:12.169: INFO: Pod "pod-secrets-a1649125-5638-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:33:12.176: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a1649125-5638-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 12:33:12.711: INFO: Waiting for pod pod-secrets-a1649125-5638-11ea-8363-0242ac110008 to disappear
Feb 23 12:33:12.766: INFO: Pod pod-secrets-a1649125-5638-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:33:12.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tw8x7" for this suite.
Feb 23 12:33:18.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:33:19.101: INFO: namespace: e2e-tests-secrets-tw8x7, resource: bindings, ignored listing per whitelist
Feb 23 12:33:19.148: INFO: namespace e2e-tests-secrets-tw8x7 deletion completed in 6.281268816s

• [SLOW TEST:17.932 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
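
Editor's note: the Secrets test above mounts the same Secret into one pod twice, at two different paths, and checks it is readable from both. A sketch; the secret name, key, and mount paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretName := "secret-test-demo"
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			// Two volume entries can point at the same Secret.
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
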
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:33:19.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-z6mn2
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-z6mn2
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-z6mn2
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-z6mn2
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-z6mn2
Feb 23 12:33:29.701: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-z6mn2, name: ss-0, uid: ae885ee6-5638-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 23 12:33:32.600: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-z6mn2, name: ss-0, uid: ae885ee6-5638-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 23 12:33:32.660: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-z6mn2, name: ss-0, uid: ae885ee6-5638-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 23 12:33:32.683: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-z6mn2
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-z6mn2
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-z6mn2 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 23 12:33:45.868: INFO: Deleting all statefulset in ns e2e-tests-statefulset-z6mn2
Feb 23 12:33:45.882: INFO: Scaling statefulset ss to 0
Feb 23 12:33:55.969: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 12:33:55.976: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:33:56.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-z6mn2" for this suite.
Feb 23 12:34:04.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:34:04.212: INFO: namespace: e2e-tests-statefulset-z6mn2, resource: bindings, ignored listing per whitelist
Feb 23 12:34:04.281: INFO: namespace e2e-tests-statefulset-z6mn2 deletion completed in 8.263369539s

• [SLOW TEST:45.131 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
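
Editor's note: the StatefulSet test above first creates a plain pod that occupies a host port, then a single-replica StatefulSet whose pod ss-0 requests the same port; ss-0 fails, and once the conflicting pod is removed the controller is expected to recreate ss-0 and run it. A sketch of a minimal single-replica StatefulSet with a host port; the labels, image, and port number are illustrative, while the service name "test" matches the one created in the log:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "ss-demo"}
	replicas := int32(1)
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless Service created beforehand, as in the log
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "nginx",
						// The host port is what collides with the pre-created pod.
						Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
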
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:34:04.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 23 12:34:04.685: INFO: Number of nodes with available pods: 0
Feb 23 12:34:04.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:05.705: INFO: Number of nodes with available pods: 0
Feb 23 12:34:05.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:07.032: INFO: Number of nodes with available pods: 0
Feb 23 12:34:07.032: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:07.711: INFO: Number of nodes with available pods: 0
Feb 23 12:34:07.711: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:08.743: INFO: Number of nodes with available pods: 0
Feb 23 12:34:08.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:09.705: INFO: Number of nodes with available pods: 0
Feb 23 12:34:09.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:11.435: INFO: Number of nodes with available pods: 0
Feb 23 12:34:11.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:11.901: INFO: Number of nodes with available pods: 0
Feb 23 12:34:11.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:12.723: INFO: Number of nodes with available pods: 0
Feb 23 12:34:12.723: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:13.736: INFO: Number of nodes with available pods: 0
Feb 23 12:34:13.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:14.696: INFO: Number of nodes with available pods: 1
Feb 23 12:34:14.696: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 23 12:34:14.774: INFO: Number of nodes with available pods: 0
Feb 23 12:34:14.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:16.058: INFO: Number of nodes with available pods: 0
Feb 23 12:34:16.059: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:16.796: INFO: Number of nodes with available pods: 0
Feb 23 12:34:16.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:17.938: INFO: Number of nodes with available pods: 0
Feb 23 12:34:17.938: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:18.798: INFO: Number of nodes with available pods: 0
Feb 23 12:34:18.798: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:19.910: INFO: Number of nodes with available pods: 0
Feb 23 12:34:19.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:20.807: INFO: Number of nodes with available pods: 0
Feb 23 12:34:20.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:21.811: INFO: Number of nodes with available pods: 0
Feb 23 12:34:21.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:22.817: INFO: Number of nodes with available pods: 0
Feb 23 12:34:22.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:23.838: INFO: Number of nodes with available pods: 0
Feb 23 12:34:23.838: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:24.795: INFO: Number of nodes with available pods: 0
Feb 23 12:34:24.795: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:25.818: INFO: Number of nodes with available pods: 0
Feb 23 12:34:25.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:26.801: INFO: Number of nodes with available pods: 0
Feb 23 12:34:26.801: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:27.833: INFO: Number of nodes with available pods: 0
Feb 23 12:34:27.833: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:28.793: INFO: Number of nodes with available pods: 0
Feb 23 12:34:28.793: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:29.811: INFO: Number of nodes with available pods: 0
Feb 23 12:34:29.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:30.803: INFO: Number of nodes with available pods: 0
Feb 23 12:34:30.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:31.816: INFO: Number of nodes with available pods: 0
Feb 23 12:34:31.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:32.905: INFO: Number of nodes with available pods: 0
Feb 23 12:34:32.905: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:33.802: INFO: Number of nodes with available pods: 0
Feb 23 12:34:33.802: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:34.807: INFO: Number of nodes with available pods: 0
Feb 23 12:34:34.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:35.808: INFO: Number of nodes with available pods: 0
Feb 23 12:34:35.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:36.830: INFO: Number of nodes with available pods: 0
Feb 23 12:34:36.830: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:37.794: INFO: Number of nodes with available pods: 0
Feb 23 12:34:37.794: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:38.807: INFO: Number of nodes with available pods: 0
Feb 23 12:34:38.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:39.803: INFO: Number of nodes with available pods: 0
Feb 23 12:34:39.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:40.786: INFO: Number of nodes with available pods: 0
Feb 23 12:34:40.786: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:41.884: INFO: Number of nodes with available pods: 0
Feb 23 12:34:41.884: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:34:42.830: INFO: Number of nodes with available pods: 1
Feb 23 12:34:42.830: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2bmcp, will wait for the garbage collector to delete the pods
Feb 23 12:34:42.980: INFO: Deleting DaemonSet.extensions daemon-set took: 83.888511ms
Feb 23 12:34:43.180: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.552963ms
Feb 23 12:34:51.103: INFO: Number of nodes with available pods: 0
Feb 23 12:34:51.103: INFO: Number of running nodes: 0, number of available pods: 0
Feb 23 12:34:51.108: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2bmcp/daemonsets","resourceVersion":"22646940"},"items":null}

Feb 23 12:34:51.112: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2bmcp/pods","resourceVersion":"22646940"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:34:51.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2bmcp" for this suite.
Feb 23 12:34:57.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:34:57.381: INFO: namespace: e2e-tests-daemonsets-2bmcp, resource: bindings, ignored listing per whitelist
Feb 23 12:34:57.504: INFO: namespace e2e-tests-daemonsets-2bmcp deletion completed in 6.366807723s

• [SLOW TEST:53.223 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
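
The run above boils down to: create a DaemonSet, wait until a daemon pod is available on every schedulable node (the single node hunter-server-hu5at5svl7ps here), then delete the DaemonSet and let the garbage collector remove its pods. The same flow can be reproduced with kubectl against any cluster reachable through the kubeconfig; the manifest below is a minimal sketch, not the suite's exact fixture, and the ds-demo namespace and the container name app are illustrative.

kubectl create namespace ds-demo
cat <<EOF | kubectl apply -n ds-demo -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Wait until a daemon pod is available on every schedulable node.
kubectl -n ds-demo rollout status daemonset/daemon-set
# Tear down; the garbage collector deletes the daemon pods, as in the log above.
kubectl -n ds-demo delete daemonset daemon-set
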
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:34:57.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0223 12:35:00.293427       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 23 12:35:00.293: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:35:00.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-gzsth" for this suite.
Feb 23 12:35:06.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:35:06.495: INFO: namespace: e2e-tests-gc-gzsth, resource: bindings, ignored listing per whitelist
Feb 23 12:35:06.525: INFO: namespace e2e-tests-gc-gzsth deletion completed in 6.227445594s

• [SLOW TEST:9.021 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
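
What the garbage-collector test exercises is ownership-based cascading deletion: the Deployment owns a ReplicaSet, the ReplicaSet owns the pods, and a non-orphaning delete of the Deployment lets the garbage collector remove both dependents (the "expected 0 rs, got 1 rs" step is just an intermediate poll before collection finishes). A rough kubectl equivalent, with an illustrative deployment name rather than the suite's fixture:

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get replicaset -l app=gc-demo        # the dependent ReplicaSet, owned via ownerReferences
# Default (non-orphaning) delete: the ReplicaSet and its pods are garbage collected too.
kubectl delete deployment gc-demo
kubectl get replicaset,pods -l app=gc-demo   # eventually empty

Passing --cascade=false on older kubectl releases (or --cascade=orphan on newer ones) would instead leave the ReplicaSet behind, which is what the orphaning variants of this test check.
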
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:35:06.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 23 12:35:17.685: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ec320a13-5638-11ea-8363-0242ac110008"
Feb 23 12:35:17.685: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ec320a13-5638-11ea-8363-0242ac110008" in namespace "e2e-tests-pods-j8jg7" to be "terminated due to deadline exceeded"
Feb 23 12:35:17.748: INFO: Pod "pod-update-activedeadlineseconds-ec320a13-5638-11ea-8363-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 62.108809ms
Feb 23 12:35:19.763: INFO: Pod "pod-update-activedeadlineseconds-ec320a13-5638-11ea-8363-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.077952787s
Feb 23 12:35:19.764: INFO: Pod "pod-update-activedeadlineseconds-ec320a13-5638-11ea-8363-0242ac110008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:35:19.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-j8jg7" for this suite.
Feb 23 12:35:25.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:35:25.944: INFO: namespace: e2e-tests-pods-j8jg7, resource: bindings, ignored listing per whitelist
Feb 23 12:35:26.010: INFO: namespace e2e-tests-pods-j8jg7 deletion completed in 6.236833352s

• [SLOW TEST:19.484 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
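
activeDeadlineSeconds is one of the few pod-spec fields the API server allows to be updated on a running pod (it may be added or shortened, never extended), and once the deadline passes the kubelet fails the pod with reason DeadlineExceeded, which is the "terminated due to deadline exceeded" condition polled above. A sketch of the same behaviour with kubectl; the pod name and the 5-second deadline are illustrative:

kubectl run deadline-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/deadline-demo
# Shrink the deadline on the live pod; this particular spec field is mutable.
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# A few seconds later the pod is Failed with reason DeadlineExceeded.
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'
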
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:35:26.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 12:35:26.218: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 23 12:35:26.239: INFO: Number of nodes with available pods: 0
Feb 23 12:35:26.239: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:27.263: INFO: Number of nodes with available pods: 0
Feb 23 12:35:27.263: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:28.341: INFO: Number of nodes with available pods: 0
Feb 23 12:35:28.341: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:29.275: INFO: Number of nodes with available pods: 0
Feb 23 12:35:29.275: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:30.340: INFO: Number of nodes with available pods: 0
Feb 23 12:35:30.340: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:32.376: INFO: Number of nodes with available pods: 0
Feb 23 12:35:32.377: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:33.343: INFO: Number of nodes with available pods: 0
Feb 23 12:35:33.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:34.281: INFO: Number of nodes with available pods: 0
Feb 23 12:35:34.282: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:35.263: INFO: Number of nodes with available pods: 1
Feb 23 12:35:35.263: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 23 12:35:35.404: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:36.508: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:37.494: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:38.505: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:39.491: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:40.536: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:41.884: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:41.884: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:42.518: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:42.518: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:43.486: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:43.486: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:44.507: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:44.507: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:45.493: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:45.493: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:47.422: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:47.423: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:47.480: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:47.480: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:48.495: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:48.496: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:49.486: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:49.487: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:50.715: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:50.715: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:51.491: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:51.491: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:52.550: INFO: Wrong image for pod: daemon-set-kkc5z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 23 12:35:52.550: INFO: Pod daemon-set-kkc5z is not available
Feb 23 12:35:54.595: INFO: Pod daemon-set-xc676 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 23 12:35:55.182: INFO: Number of nodes with available pods: 0
Feb 23 12:35:55.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:56.214: INFO: Number of nodes with available pods: 0
Feb 23 12:35:56.214: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:57.268: INFO: Number of nodes with available pods: 0
Feb 23 12:35:57.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:58.215: INFO: Number of nodes with available pods: 0
Feb 23 12:35:58.215: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:35:59.696: INFO: Number of nodes with available pods: 0
Feb 23 12:35:59.696: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:36:00.214: INFO: Number of nodes with available pods: 0
Feb 23 12:36:00.214: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:36:01.215: INFO: Number of nodes with available pods: 0
Feb 23 12:36:01.215: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:36:02.199: INFO: Number of nodes with available pods: 0
Feb 23 12:36:02.199: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 12:36:03.210: INFO: Number of nodes with available pods: 1
Feb 23 12:36:03.210: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vt5cz, will wait for the garbage collector to delete the pods
Feb 23 12:36:03.316: INFO: Deleting DaemonSet.extensions daemon-set took: 24.900174ms
Feb 23 12:36:03.417: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.529159ms
Feb 23 12:36:11.076: INFO: Number of nodes with available pods: 0
Feb 23 12:36:11.076: INFO: Number of running nodes: 0, number of available pods: 0
Feb 23 12:36:11.086: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vt5cz/daemonsets","resourceVersion":"22647162"},"items":null}

Feb 23 12:36:11.091: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vt5cz/pods","resourceVersion":"22647162"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:36:11.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vt5cz" for this suite.
Feb 23 12:36:17.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:36:17.212: INFO: namespace: e2e-tests-daemonsets-vt5cz, resource: bindings, ignored listing per whitelist
Feb 23 12:36:17.306: INFO: namespace e2e-tests-daemonsets-vt5cz deletion completed in 6.197909433s

• [SLOW TEST:51.295 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
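
Here the suite patches the DaemonSet's pod template image and, because the update strategy is RollingUpdate (the default for apps/v1 DaemonSets), expects the controller to delete the old pod and create a replacement running the new image; the repeated "Wrong image for pod" lines are the poll waiting for that replacement. Assuming the DaemonSet from the earlier sketch (container name app) is still present, the same rollout looks roughly like:

kubectl -n ds-demo set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
# Old pods are removed and recreated with the new image, node by node.
kubectl -n ds-demo rollout status daemonset/daemon-set
kubectl -n ds-demo get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
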
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:36:17.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 12:36:17.515: INFO: Waiting up to 5m0s for pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-lvlnt" to be "success or failure"
Feb 23 12:36:17.539: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.528026ms
Feb 23 12:36:19.566: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050686466s
Feb 23 12:36:21.582: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06688139s
Feb 23 12:36:24.170: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654619726s
Feb 23 12:36:26.229: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714142385s
Feb 23 12:36:28.244: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.729217173s
STEP: Saw pod success
Feb 23 12:36:28.245: INFO: Pod "downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:36:28.253: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 12:36:29.405: INFO: Waiting for pod downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:36:29.422: INFO: Pod downwardapi-volume-164515ec-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:36:29.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lvlnt" for this suite.
Feb 23 12:36:35.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:36:35.700: INFO: namespace: e2e-tests-downward-api-lvlnt, resource: bindings, ignored listing per whitelist
Feb 23 12:36:35.700: INFO: namespace e2e-tests-downward-api-lvlnt deletion completed in 6.264509956s

• [SLOW TEST:18.394 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
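
The downward API volume uses a resourceFieldRef on limits.memory, and when the container declares no memory limit the value written to the file falls back to the node's allocatable memory, which is exactly what this test verifies. A minimal sketch of such a pod; the pod name, busybox image and mount path are illustrative, the container name mirrors the log:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory here, so the file reports node allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-memlimit-demo    # prints the limit in bytes once the container has run
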
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:36:35.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 23 12:36:35.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:38.332: INFO: stderr: ""
Feb 23 12:36:38.332: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 23 12:36:38.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:38.649: INFO: stderr: ""
Feb 23 12:36:38.649: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
Feb 23 12:36:38.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkd7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:39.000: INFO: stderr: ""
Feb 23 12:36:39.000: INFO: stdout: ""
Feb 23 12:36:39.000: INFO: update-demo-nautilus-dkd7r is created but not running
Feb 23 12:36:44.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:44.290: INFO: stderr: ""
Feb 23 12:36:44.290: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
Feb 23 12:36:44.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkd7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:44.501: INFO: stderr: ""
Feb 23 12:36:44.501: INFO: stdout: ""
Feb 23 12:36:44.501: INFO: update-demo-nautilus-dkd7r is created but not running
Feb 23 12:36:49.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:49.889: INFO: stderr: ""
Feb 23 12:36:49.889: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
Feb 23 12:36:49.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkd7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:50.061: INFO: stderr: ""
Feb 23 12:36:50.061: INFO: stdout: ""
Feb 23 12:36:50.061: INFO: update-demo-nautilus-dkd7r is created but not running
Feb 23 12:36:55.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:55.201: INFO: stderr: ""
Feb 23 12:36:55.201: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
Feb 23 12:36:55.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkd7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:55.351: INFO: stderr: ""
Feb 23 12:36:55.351: INFO: stdout: "true"
Feb 23 12:36:55.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkd7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:55.505: INFO: stderr: ""
Feb 23 12:36:55.505: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:36:55.505: INFO: validating pod update-demo-nautilus-dkd7r
Feb 23 12:36:55.537: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:36:55.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:36:55.537: INFO: update-demo-nautilus-dkd7r is verified up and running
Feb 23 12:36:55.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz5v5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:55.668: INFO: stderr: ""
Feb 23 12:36:55.668: INFO: stdout: "true"
Feb 23 12:36:55.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz5v5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:55.797: INFO: stderr: ""
Feb 23 12:36:55.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:36:55.797: INFO: validating pod update-demo-nautilus-mz5v5
Feb 23 12:36:55.811: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:36:55.811: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:36:55.811: INFO: update-demo-nautilus-mz5v5 is verified up and running
STEP: scaling down the replication controller
Feb 23 12:36:55.814: INFO: scanned /root for discovery docs: 
Feb 23 12:36:55.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:57.397: INFO: stderr: ""
Feb 23 12:36:57.397: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 23 12:36:57.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:36:57.640: INFO: stderr: ""
Feb 23 12:36:57.640: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 23 12:37:02.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:02.778: INFO: stderr: ""
Feb 23 12:37:02.778: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 23 12:37:07.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:08.062: INFO: stderr: ""
Feb 23 12:37:08.062: INFO: stdout: "update-demo-nautilus-dkd7r update-demo-nautilus-mz5v5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 23 12:37:13.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:13.291: INFO: stderr: ""
Feb 23 12:37:13.291: INFO: stdout: "update-demo-nautilus-mz5v5 "
Feb 23 12:37:13.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz5v5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:13.410: INFO: stderr: ""
Feb 23 12:37:13.410: INFO: stdout: "true"
Feb 23 12:37:13.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz5v5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:13.519: INFO: stderr: ""
Feb 23 12:37:13.519: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:37:13.519: INFO: validating pod update-demo-nautilus-mz5v5
Feb 23 12:37:13.529: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:37:13.529: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:37:13.529: INFO: update-demo-nautilus-mz5v5 is verified up and running
STEP: scaling up the replication controller
Feb 23 12:37:13.532: INFO: scanned /root for discovery docs: 
Feb 23 12:37:13.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:14.965: INFO: stderr: ""
Feb 23 12:37:14.965: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 23 12:37:14.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:15.596: INFO: stderr: ""
Feb 23 12:37:15.596: INFO: stdout: "update-demo-nautilus-kv7kf update-demo-nautilus-mz5v5 "
Feb 23 12:37:15.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv7kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:16.133: INFO: stderr: ""
Feb 23 12:37:16.133: INFO: stdout: ""
Feb 23 12:37:16.133: INFO: update-demo-nautilus-kv7kf is created but not running
Feb 23 12:37:21.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:21.264: INFO: stderr: ""
Feb 23 12:37:21.264: INFO: stdout: "update-demo-nautilus-kv7kf update-demo-nautilus-mz5v5 "
Feb 23 12:37:21.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv7kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:21.368: INFO: stderr: ""
Feb 23 12:37:21.368: INFO: stdout: ""
Feb 23 12:37:21.368: INFO: update-demo-nautilus-kv7kf is created but not running
Feb 23 12:37:26.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:26.598: INFO: stderr: ""
Feb 23 12:37:26.598: INFO: stdout: "update-demo-nautilus-kv7kf update-demo-nautilus-mz5v5 "
Feb 23 12:37:26.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv7kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:26.798: INFO: stderr: ""
Feb 23 12:37:26.798: INFO: stdout: "true"
Feb 23 12:37:26.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv7kf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:26.971: INFO: stderr: ""
Feb 23 12:37:26.971: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:37:26.971: INFO: validating pod update-demo-nautilus-kv7kf
Feb 23 12:37:26.985: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:37:26.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:37:26.985: INFO: update-demo-nautilus-kv7kf is verified up and running
Feb 23 12:37:26.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz5v5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:27.182: INFO: stderr: ""
Feb 23 12:37:27.183: INFO: stdout: "true"
Feb 23 12:37:27.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz5v5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:27.284: INFO: stderr: ""
Feb 23 12:37:27.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 23 12:37:27.285: INFO: validating pod update-demo-nautilus-mz5v5
Feb 23 12:37:27.293: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 23 12:37:27.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 23 12:37:27.293: INFO: update-demo-nautilus-mz5v5 is verified up and running
STEP: using delete to clean up resources
Feb 23 12:37:27.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:27.431: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 23 12:37:27.431: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 23 12:37:27.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-4f24n'
Feb 23 12:37:27.648: INFO: stderr: "No resources found.\n"
Feb 23 12:37:27.648: INFO: stdout: ""
Feb 23 12:37:27.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-4f24n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 23 12:37:27.808: INFO: stderr: ""
Feb 23 12:37:27.808: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:37:27.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4f24n" for this suite.
Feb 23 12:37:55.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:37:55.995: INFO: namespace: e2e-tests-kubectl-4f24n, resource: bindings, ignored listing per whitelist
Feb 23 12:37:56.040: INFO: namespace e2e-tests-kubectl-4f24n deletion completed in 28.210281s

• [SLOW TEST:80.340 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
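
The Update Demo test drives everything through the kubectl binary, as the echoed command lines show: create the replication controller, scale it down to one replica, back up to two, validate each pod's image and served data, then force-delete. Stripped of the e2e wrappers, the scaling portion is simply the following (namespace placeholder left as-is):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n <namespace>
kubectl get pods -l name=update-demo -n <namespace>    # eventually a single nautilus pod
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n <namespace>
kubectl get pods -l name=update-demo -n <namespace>    # back to two pods
# Clean-up mirrors the suite's forced delete of the controller and its pods.
kubectl delete rc update-demo-nautilus --grace-period=0 --force -n <namespace>
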
SSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:37:56.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 23 12:38:04.302: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-51159dce-5639-11ea-8363-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-x9qff", SelfLink:"/api/v1/namespaces/e2e-tests-pods-x9qff/pods/pod-submit-remove-51159dce-5639-11ea-8363-0242ac110008", UID:"5117c5dd-5639-11ea-a994-fa163e34d433", ResourceVersion:"22647427", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718058276, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"171739954", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-f8fh8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fd9880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f8fh8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001000b88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001cfdec0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001000c00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001000d90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001000d98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001000d9c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718058276, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718058284, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718058284, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718058276, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000fe2400), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000fe2420), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://9b72ae3375a16c32a8eb5dc388ddb21b58c08d5a13ead5ff45701b0d18868f01"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:38:22.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-x9qff" for this suite.
Feb 23 12:38:28.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:38:29.012: INFO: namespace: e2e-tests-pods-x9qff, resource: bindings, ignored listing per whitelist
Feb 23 12:38:29.027: INFO: namespace e2e-tests-pods-x9qff deletion completed in 6.211031693s

• [SLOW TEST:32.986 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
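
This test sets up a watch before submitting the pod so it can assert that both the creation and the graceful deletion are observed as events; the long object dump above is just the running pod printed by the framework before deletion. Outside the suite the same flow is roughly the following sketch (pod name and label are illustrative, the nginx image matches the log):

kubectl get pods -l name=foo --watch &    # observe the ADDED/MODIFIED/DELETED events
kubectl run submit-remove-demo --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --labels=name=foo
kubectl wait --for=condition=Ready pod/submit-remove-demo
# Graceful delete: the kubelet receives the termination notice, then the object disappears.
kubectl delete pod submit-remove-demo --grace-period=30
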
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:38:29.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:38:35.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-h8b9x" for this suite.
Feb 23 12:38:41.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:38:42.028: INFO: namespace: e2e-tests-namespaces-h8b9x, resource: bindings, ignored listing per whitelist
Feb 23 12:38:42.115: INFO: namespace e2e-tests-namespaces-h8b9x deletion completed in 6.330898388s
STEP: Destroying namespace "e2e-tests-nsdeletetest-hjq5j" for this suite.
Feb 23 12:38:42.117: INFO: Namespace e2e-tests-nsdeletetest-hjq5j was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-2dztv" for this suite.
Feb 23 12:38:48.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:38:48.368: INFO: namespace: e2e-tests-nsdeletetest-2dztv, resource: bindings, ignored listing per whitelist
Feb 23 12:38:48.443: INFO: namespace e2e-tests-nsdeletetest-2dztv deletion completed in 6.325678362s

• [SLOW TEST:19.415 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
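
Namespace deletion cascades to every namespaced object, so after the namespace is recreated under the same name the service created earlier must be gone; that is the whole assertion. A quick reproduction with illustrative names:

kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
kubectl delete namespace nsdelete-demo    # blocks until the namespace finishes finalizing
kubectl create namespace nsdelete-demo    # recreate under the same name
kubectl -n nsdelete-demo get services     # "No resources found" - the service did not survive
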
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:38:48.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 23 12:38:48.793: INFO: Waiting up to 5m0s for pod "pod-70691f10-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-c9xzg" to be "success or failure"
Feb 23 12:38:48.806: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791595ms
Feb 23 12:38:50.835: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041015274s
Feb 23 12:38:52.857: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063594384s
Feb 23 12:38:55.051: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257275319s
Feb 23 12:38:57.069: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.275115021s
Feb 23 12:38:59.085: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.291136417s
STEP: Saw pod success
Feb 23 12:38:59.085: INFO: Pod "pod-70691f10-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:38:59.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-70691f10-5639-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 12:38:59.150: INFO: Waiting for pod pod-70691f10-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:38:59.158: INFO: Pod pod-70691f10-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:38:59.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c9xzg" for this suite.
Feb 23 12:39:05.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:39:05.462: INFO: namespace: e2e-tests-emptydir-c9xzg, resource: bindings, ignored listing per whitelist
Feb 23 12:39:05.467: INFO: namespace e2e-tests-emptydir-c9xzg deletion completed in 6.303358876s

• [SLOW TEST:17.023 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
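
The emptyDir test writes a file with mode 0777 into a default-medium (node disk) emptyDir volume as a non-root user and checks the resulting permissions and content. The essence is a pod along these lines; the uid, pod name and busybox image are illustrative, the container name mirrors the log:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root, illustrative uid
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /data/f && chmod 0777 /data/f && ls -l /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}             # default medium = node-local disk
EOF
kubectl logs emptydir-demo
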
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:39:05.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 12:39:05.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-phfhw" to be "success or failure"
Feb 23 12:39:05.677: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 40.122242ms
Feb 23 12:39:07.698: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061157443s
Feb 23 12:39:09.719: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081861119s
Feb 23 12:39:11.760: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123363867s
Feb 23 12:39:13.859: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222206458s
Feb 23 12:39:15.958: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32129411s
STEP: Saw pod success
Feb 23 12:39:15.958: INFO: Pod "downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:39:16.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 12:39:16.381: INFO: Waiting for pod downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:39:16.454: INFO: Pod downwardapi-volume-7a79a71c-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:39:16.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-phfhw" for this suite.
Feb 23 12:39:22.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:39:22.669: INFO: namespace: e2e-tests-downward-api-phfhw, resource: bindings, ignored listing per whitelist
Feb 23 12:39:22.790: INFO: namespace e2e-tests-downward-api-phfhw deletion completed in 6.325281288s

• [SLOW TEST:17.323 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
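
This is the counterpart of the earlier "default memory limit" case: the container now declares an explicit memory limit, and the downward API file is expected to contain exactly that value, expressed in bytes. The only change relative to the earlier sketch is the resources block; it is repeated in full here so it stands alone, with illustrative names:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-explicit-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi         # illustrative limit; the file should report it in bytes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-explicit-limit-demo    # 67108864, i.e. 64Mi in bytes
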
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:39:22.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-84d9f950-5639-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:39:23.104: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-msgq4" to be "success or failure"
Feb 23 12:39:23.150: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 45.522406ms
Feb 23 12:39:25.175: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070298577s
Feb 23 12:39:27.186: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08165019s
Feb 23 12:39:29.196: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09162963s
Feb 23 12:39:31.210: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105916436s
Feb 23 12:39:33.280: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176039732s
STEP: Saw pod success
Feb 23 12:39:33.280: INFO: Pod "pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:39:33.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 12:39:34.353: INFO: Waiting for pod pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:39:34.545: INFO: Pod pod-projected-secrets-84e14343-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:39:34.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-msgq4" for this suite.
Feb 23 12:39:40.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:39:40.777: INFO: namespace: e2e-tests-projected-msgq4, resource: bindings, ignored listing per whitelist
Feb 23 12:39:40.795: INFO: namespace e2e-tests-projected-msgq4 deletion completed in 6.233773677s

• [SLOW TEST:18.005 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
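A rough sketch, under the same assumptions (k8s.io/api/core/v1, illustrative names and image), of how one secret can be consumed through two projected volumes in a single pod, which is the shape the test above exercises.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretVolume returns a projected volume that surfaces the
// named secret; the pod below mounts two such volumes.
func projectedSecretVolume(volName, secretName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/secret-volume-1"},
					{Name: "vol-2", MountPath: "/etc/secret-volume-2"},
				},
			}},
			Volumes: []corev1.Volume{
				projectedSecretVolume("vol-1", "projected-secret-example"),
				projectedSecretVolume("vol-2", "projected-secret-example"),
			},
		},
	}
	fmt.Println(pod.Name)
}
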
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:39:40.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-5tttd/secret-test-8f8ad024-5639-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:39:40.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-5tttd" to be "success or failure"
Feb 23 12:39:40.996: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.3619ms
Feb 23 12:39:43.009: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035040217s
Feb 23 12:39:45.018: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044033878s
Feb 23 12:39:48.481: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.506704487s
Feb 23 12:39:50.498: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.523485686s
Feb 23 12:39:52.527: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.553140382s
STEP: Saw pod success
Feb 23 12:39:52.528: INFO: Pod "pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:39:52.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008 container env-test: 
STEP: delete the pod
Feb 23 12:39:52.765: INFO: Waiting for pod pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:39:52.774: INFO: Pod pod-configmaps-8f8b53ac-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:39:52.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5tttd" for this suite.
Feb 23 12:39:58.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:39:59.040: INFO: namespace: e2e-tests-secrets-5tttd, resource: bindings, ignored listing per whitelist
Feb 23 12:39:59.151: INFO: namespace e2e-tests-secrets-5tttd deletion completed in 6.368613181s

• [SLOW TEST:18.355 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
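A small sketch of the environment-variable consumption pattern the test above checks: one key of a secret mapped into an env var via secretKeyRef. Names, key and image are illustrative placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// main builds a pod that pulls one key of a secret into an
// environment variable and echoes it.
func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
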
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:39:59.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-9a7c7a68-5639-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:39:59.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-ltktm" to be "success or failure"
Feb 23 12:39:59.356: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155313ms
Feb 23 12:40:01.634: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285985787s
Feb 23 12:40:03.662: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314147515s
Feb 23 12:40:05.838: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489880251s
Feb 23 12:40:08.757: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.408196918s
Feb 23 12:40:10.775: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.426309621s
Feb 23 12:40:12.807: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.458688525s
STEP: Saw pod success
Feb 23 12:40:12.807: INFO: Pod "pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:40:12.813: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 23 12:40:13.061: INFO: Waiting for pod pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:40:13.071: INFO: Pod pod-configmaps-9a7dfd1b-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:40:13.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ltktm" for this suite.
Feb 23 12:40:19.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:40:19.476: INFO: namespace: e2e-tests-configmap-ltktm, resource: bindings, ignored listing per whitelist
Feb 23 12:40:19.478: INFO: namespace e2e-tests-configmap-ltktm deletion completed in 6.370005675s

• [SLOW TEST:20.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
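A sketch of a configMap volume with defaultMode set, the mechanism the test above verifies: the mode applies to every projected file unless an item overrides it. The 0400 mode, names and image are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// main sketches a pod mounting a configMap volume whose files get
// permission bits from DefaultMode.
func main() {
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-example"},
						DefaultMode:          &mode,
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
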
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:40:19.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a69bbabb-5639-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:40:19.678: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-pt8q5" to be "success or failure"
Feb 23 12:40:19.753: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 75.069178ms
Feb 23 12:40:21.823: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145073053s
Feb 23 12:40:23.843: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164677349s
Feb 23 12:40:25.882: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204052281s
Feb 23 12:40:28.025: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346529996s
Feb 23 12:40:30.583: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.904852s
STEP: Saw pod success
Feb 23 12:40:30.583: INFO: Pod "pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:40:30.606: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 23 12:40:31.038: INFO: Waiting for pod pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:40:31.093: INFO: Pod pod-projected-secrets-a69cfca4-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:40:31.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pt8q5" for this suite.
Feb 23 12:40:37.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:40:37.204: INFO: namespace: e2e-tests-projected-pt8q5, resource: bindings, ignored listing per whitelist
Feb 23 12:40:37.245: INFO: namespace e2e-tests-projected-pt8q5 deletion completed in 6.14155814s

• [SLOW TEST:17.767 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:40:37.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b1321721-5639-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:40:37.437: INFO: Waiting up to 5m0s for pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-5kxsz" to be "success or failure"
Feb 23 12:40:37.464: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.912979ms
Feb 23 12:40:39.938: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501101351s
Feb 23 12:40:41.959: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.522429398s
Feb 23 12:40:43.972: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535825714s
Feb 23 12:40:45.987: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550850688s
Feb 23 12:40:48.064: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.62721128s
STEP: Saw pod success
Feb 23 12:40:48.064: INFO: Pod "pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:40:48.103: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 12:40:48.216: INFO: Waiting for pod pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:40:48.234: INFO: Pod pod-secrets-b1331ebd-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:40:48.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5kxsz" for this suite.
Feb 23 12:40:54.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:40:54.396: INFO: namespace: e2e-tests-secrets-5kxsz, resource: bindings, ignored listing per whitelist
Feb 23 12:40:54.459: INFO: namespace e2e-tests-secrets-5kxsz deletion completed in 6.215138404s

• [SLOW TEST:17.214 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:40:54.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 23 12:40:54.718: INFO: Waiting up to 5m0s for pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-h67cs" to be "success or failure"
Feb 23 12:40:54.737: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.534624ms
Feb 23 12:40:56.750: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032499003s
Feb 23 12:40:58.785: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067097289s
Feb 23 12:41:00.860: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141911606s
Feb 23 12:41:02.891: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173095431s
Feb 23 12:41:04.913: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.195049907s
STEP: Saw pod success
Feb 23 12:41:04.913: INFO: Pod "pod-bb7f5e2b-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:41:04.937: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bb7f5e2b-5639-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 12:41:05.237: INFO: Waiting for pod pod-bb7f5e2b-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:41:05.251: INFO: Pod pod-bb7f5e2b-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:41:05.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h67cs" for this suite.
Feb 23 12:41:11.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:41:11.345: INFO: namespace: e2e-tests-emptydir-h67cs, resource: bindings, ignored listing per whitelist
Feb 23 12:41:11.462: INFO: namespace e2e-tests-emptydir-h67cs deletion completed in 6.204469181s

• [SLOW TEST:17.002 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
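A loose sketch of the (non-root,0666,tmpfs) emptyDir combination above: a memory-backed emptyDir mounted by a container running as a non-root UID, which then inspects the mount's permissions. The UID, commands and image are illustrative only; the real test uses a dedicated mounttest image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// main builds a pod with a tmpfs-backed emptyDir and a non-root user.
func main() {
	nonRoot := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
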
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:41:11.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb 23 12:41:11.771: INFO: Waiting up to 5m0s for pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008" in namespace "e2e-tests-containers-b82s2" to be "success or failure"
Feb 23 12:41:11.786: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.629667ms
Feb 23 12:41:14.379: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608011016s
Feb 23 12:41:16.393: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622033697s
Feb 23 12:41:18.408: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637580681s
Feb 23 12:41:20.748: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.97712981s
Feb 23 12:41:22.763: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.992403209s
STEP: Saw pod success
Feb 23 12:41:22.764: INFO: Pod "client-containers-c5a12d2a-5639-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:41:22.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-c5a12d2a-5639-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 12:41:24.161: INFO: Waiting for pod client-containers-c5a12d2a-5639-11ea-8363-0242ac110008 to disappear
Feb 23 12:41:24.173: INFO: Pod client-containers-c5a12d2a-5639-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:41:24.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-b82s2" for this suite.
Feb 23 12:41:30.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:41:30.370: INFO: namespace: e2e-tests-containers-b82s2, resource: bindings, ignored listing per whitelist
Feb 23 12:41:30.415: INFO: namespace e2e-tests-containers-b82s2 deletion completed in 6.230945327s

• [SLOW TEST:18.953 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
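A brief sketch of the command-override behaviour the test above relies on: setting Command on a container replaces the image's default ENTRYPOINT (Args would replace CMD). The image and command are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// main builds a pod whose container overrides the image entrypoint.
func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // any image with a default entrypoint would do
				Command: []string{"echo", "overridden entrypoint"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
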
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:41:30.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 23 12:41:38.105: INFO: 10 pods remaining
Feb 23 12:41:38.105: INFO: 7 pods have nil DeletionTimestamp
Feb 23 12:41:38.105: INFO: 
STEP: Gathering metrics
W0223 12:41:39.102174       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 23 12:41:39.102: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:41:39.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jbbqj" for this suite.
Feb 23 12:41:55.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:41:55.263: INFO: namespace: e2e-tests-gc-jbbqj, resource: bindings, ignored listing per whitelist
Feb 23 12:41:55.317: INFO: namespace e2e-tests-gc-jbbqj deletion completed in 16.211173416s

• [SLOW TEST:24.901 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
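The "keep the rc around until all its pods are deleted" behaviour above corresponds to foreground cascading deletion. A minimal sketch of the delete options involved, assuming the k8s.io/apimachinery types; how the options are passed to the actual delete call depends on the client-go version in use.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Foreground propagation keeps the owner (here the RC) around until the
// garbage collector has deleted all of its pods; the owner only goes
// away once its foregroundDeletion finalizer clears.
func main() {
	propagation := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &propagation}

	// These options would be supplied to the RC delete request.
	out, _ := json.Marshal(opts)
	fmt.Println(string(out))
}
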
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:41:55.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-m56c
STEP: Creating a pod to test atomic-volume-subpath
Feb 23 12:41:56.179: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m56c" in namespace "e2e-tests-subpath-s8599" to be "success or failure"
Feb 23 12:41:56.232: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.091771ms
Feb 23 12:41:58.265: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085593953s
Feb 23 12:42:00.276: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09652175s
Feb 23 12:42:02.354: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175127702s
Feb 23 12:42:04.369: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189863754s
Feb 23 12:42:06.380: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200915352s
Feb 23 12:42:08.392: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.213332798s
Feb 23 12:42:11.932: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.752631268s
Feb 23 12:42:14.574: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.395256993s
Feb 23 12:42:16.606: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 20.426784127s
Feb 23 12:42:18.634: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 22.455180841s
Feb 23 12:42:20.713: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 24.53427197s
Feb 23 12:42:22.732: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 26.552602162s
Feb 23 12:42:24.745: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 28.56601722s
Feb 23 12:42:26.761: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 30.582111078s
Feb 23 12:42:28.787: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 32.60835419s
Feb 23 12:42:30.806: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Running", Reason="", readiness=false. Elapsed: 34.626809776s
Feb 23 12:42:33.109: INFO: Pod "pod-subpath-test-configmap-m56c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.929697702s
STEP: Saw pod success
Feb 23 12:42:33.109: INFO: Pod "pod-subpath-test-configmap-m56c" satisfied condition "success or failure"
Feb 23 12:42:33.190: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-m56c container test-container-subpath-configmap-m56c: 
STEP: delete the pod
Feb 23 12:42:33.430: INFO: Waiting for pod pod-subpath-test-configmap-m56c to disappear
Feb 23 12:42:33.445: INFO: Pod pod-subpath-test-configmap-m56c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-m56c
Feb 23 12:42:33.445: INFO: Deleting pod "pod-subpath-test-configmap-m56c" in namespace "e2e-tests-subpath-s8599"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:42:33.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-s8599" for this suite.
Feb 23 12:42:39.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:42:39.704: INFO: namespace: e2e-tests-subpath-s8599, resource: bindings, ignored listing per whitelist
Feb 23 12:42:39.800: INFO: namespace e2e-tests-subpath-s8599 deletion completed in 6.341464081s

• [SLOW TEST:44.483 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
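A sketch of the subPath pattern the atomic-writer subpath test above exercises, with illustrative names: the same configMap volume is mounted twice, once whole and once with SubPath selecting a single file inside it.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// main builds a pod mounting one volume both fully and via a subPath.
func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-subpath"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "config", MountPath: "/test-volume"},
					{Name: "config", MountPath: "/test-subpath", SubPath: "data-1"},
				},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-subpath-example"},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
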
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:42:39.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-fa4e1aa6-5639-11ea-8363-0242ac110008
STEP: Creating secret with name s-test-opt-upd-fa4e1b57-5639-11ea-8363-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fa4e1aa6-5639-11ea-8363-0242ac110008
STEP: Updating secret s-test-opt-upd-fa4e1b57-5639-11ea-8363-0242ac110008
STEP: Creating secret with name s-test-opt-create-fa4e1ba2-5639-11ea-8363-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:44:19.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hzwb7" for this suite.
Feb 23 12:44:43.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:44:43.745: INFO: namespace: e2e-tests-secrets-hzwb7, resource: bindings, ignored listing per whitelist
Feb 23 12:44:43.866: INFO: namespace e2e-tests-secrets-hzwb7 deletion completed in 24.233662755s

• [SLOW TEST:124.066 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
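A small sketch of the optional-secret volume used by the test above (names illustrative): marking the secret reference Optional lets the pod start before the secret exists, and the kubelet projects the data into the volume once the secret is created or updated, which is the change the test waits to observe.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// optionalSecretVolume marks the secret reference optional, so a
// missing secret does not block pod startup.
func optionalSecretVolume(volName, secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}

func main() {
	fmt.Println(optionalSecretVolume("creates-volume", "s-test-opt-create-example").Name)
}
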
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:44:43.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-4453942d-563a-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:44:44.304: INFO: Waiting up to 5m0s for pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-tj4j4" to be "success or failure"
Feb 23 12:44:44.323: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.607816ms
Feb 23 12:44:46.519: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215171678s
Feb 23 12:44:48.543: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238930356s
Feb 23 12:44:50.563: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259038181s
Feb 23 12:44:52.598: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293789737s
Feb 23 12:44:54.637: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332280023s
STEP: Saw pod success
Feb 23 12:44:54.637: INFO: Pod "pod-configmaps-44553638-563a-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:44:54.651: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-44553638-563a-11ea-8363-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 23 12:44:55.063: INFO: Waiting for pod pod-configmaps-44553638-563a-11ea-8363-0242ac110008 to disappear
Feb 23 12:44:55.074: INFO: Pod pod-configmaps-44553638-563a-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:44:55.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tj4j4" for this suite.
Feb 23 12:45:01.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:45:01.328: INFO: namespace: e2e-tests-configmap-tj4j4, resource: bindings, ignored listing per whitelist
Feb 23 12:45:01.375: INFO: namespace e2e-tests-configmap-tj4j4 deletion completed in 6.28748884s

• [SLOW TEST:17.509 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:45:01.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 23 12:45:01.562: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:45:28.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bzzv7" for this suite.
Feb 23 12:45:52.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:45:52.954: INFO: namespace: e2e-tests-init-container-bzzv7, resource: bindings, ignored listing per whitelist
Feb 23 12:45:53.094: INFO: namespace e2e-tests-init-container-bzzv7 deletion completed in 24.380989081s

• [SLOW TEST:51.718 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
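A sketch of the RestartAlways init-container pod shape validated above, with placeholder images and commands: both init containers must run to completion, in order, before the long-running app container starts.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// main builds a RestartAlways pod with two init containers.
func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), "init containers before", pod.Spec.Containers[0].Name)
}
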
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:45:53.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb 23 12:46:08.333: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:46:09.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-qdvvs" for this suite.
Feb 23 12:46:36.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:46:37.137: INFO: namespace: e2e-tests-replicaset-qdvvs, resource: bindings, ignored listing per whitelist
Feb 23 12:46:37.241: INFO: namespace e2e-tests-replicaset-qdvvs deletion completed in 27.36077038s

• [SLOW TEST:44.147 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:46:37.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 23 12:47:01.673: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:01.673: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:01.842190       8 log.go:172] (0xc0000fd1e0) (0xc0003ae320) Create stream
I0223 12:47:01.842343       8 log.go:172] (0xc0000fd1e0) (0xc0003ae320) Stream added, broadcasting: 1
I0223 12:47:01.855784       8 log.go:172] (0xc0000fd1e0) Reply frame received for 1
I0223 12:47:01.855844       8 log.go:172] (0xc0000fd1e0) (0xc001ab10e0) Create stream
I0223 12:47:01.855850       8 log.go:172] (0xc0000fd1e0) (0xc001ab10e0) Stream added, broadcasting: 3
I0223 12:47:01.858055       8 log.go:172] (0xc0000fd1e0) Reply frame received for 3
I0223 12:47:01.858122       8 log.go:172] (0xc0000fd1e0) (0xc0020d83c0) Create stream
I0223 12:47:01.858131       8 log.go:172] (0xc0000fd1e0) (0xc0020d83c0) Stream added, broadcasting: 5
I0223 12:47:01.859177       8 log.go:172] (0xc0000fd1e0) Reply frame received for 5
I0223 12:47:02.085075       8 log.go:172] (0xc0000fd1e0) Data frame received for 3
I0223 12:47:02.085216       8 log.go:172] (0xc001ab10e0) (3) Data frame handling
I0223 12:47:02.085246       8 log.go:172] (0xc001ab10e0) (3) Data frame sent
I0223 12:47:02.273008       8 log.go:172] (0xc0000fd1e0) Data frame received for 1
I0223 12:47:02.273070       8 log.go:172] (0xc0000fd1e0) (0xc0020d83c0) Stream removed, broadcasting: 5
I0223 12:47:02.273102       8 log.go:172] (0xc0003ae320) (1) Data frame handling
I0223 12:47:02.273125       8 log.go:172] (0xc0003ae320) (1) Data frame sent
I0223 12:47:02.273151       8 log.go:172] (0xc0000fd1e0) (0xc001ab10e0) Stream removed, broadcasting: 3
I0223 12:47:02.273176       8 log.go:172] (0xc0000fd1e0) (0xc0003ae320) Stream removed, broadcasting: 1
I0223 12:47:02.273196       8 log.go:172] (0xc0000fd1e0) Go away received
I0223 12:47:02.273325       8 log.go:172] (0xc0000fd1e0) (0xc0003ae320) Stream removed, broadcasting: 1
I0223 12:47:02.273335       8 log.go:172] (0xc0000fd1e0) (0xc001ab10e0) Stream removed, broadcasting: 3
I0223 12:47:02.273341       8 log.go:172] (0xc0000fd1e0) (0xc0020d83c0) Stream removed, broadcasting: 5
Feb 23 12:47:02.273: INFO: Exec stderr: ""
Feb 23 12:47:02.273: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:02.273: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:02.371892       8 log.go:172] (0xc00171e2c0) (0xc0013f3ae0) Create stream
I0223 12:47:02.371956       8 log.go:172] (0xc00171e2c0) (0xc0013f3ae0) Stream added, broadcasting: 1
I0223 12:47:02.391841       8 log.go:172] (0xc00171e2c0) Reply frame received for 1
I0223 12:47:02.392081       8 log.go:172] (0xc00171e2c0) (0xc0003ae5a0) Create stream
I0223 12:47:02.392115       8 log.go:172] (0xc00171e2c0) (0xc0003ae5a0) Stream added, broadcasting: 3
I0223 12:47:02.393908       8 log.go:172] (0xc00171e2c0) Reply frame received for 3
I0223 12:47:02.393987       8 log.go:172] (0xc00171e2c0) (0xc0020d8460) Create stream
I0223 12:47:02.393995       8 log.go:172] (0xc00171e2c0) (0xc0020d8460) Stream added, broadcasting: 5
I0223 12:47:02.396593       8 log.go:172] (0xc00171e2c0) Reply frame received for 5
I0223 12:47:02.765730       8 log.go:172] (0xc00171e2c0) Data frame received for 3
I0223 12:47:02.765905       8 log.go:172] (0xc0003ae5a0) (3) Data frame handling
I0223 12:47:02.765951       8 log.go:172] (0xc0003ae5a0) (3) Data frame sent
I0223 12:47:02.955604       8 log.go:172] (0xc00171e2c0) Data frame received for 1
I0223 12:47:02.955686       8 log.go:172] (0xc00171e2c0) (0xc0003ae5a0) Stream removed, broadcasting: 3
I0223 12:47:02.955746       8 log.go:172] (0xc0013f3ae0) (1) Data frame handling
I0223 12:47:02.955767       8 log.go:172] (0xc0013f3ae0) (1) Data frame sent
I0223 12:47:02.955800       8 log.go:172] (0xc00171e2c0) (0xc0013f3ae0) Stream removed, broadcasting: 1
I0223 12:47:02.955826       8 log.go:172] (0xc00171e2c0) (0xc0020d8460) Stream removed, broadcasting: 5
I0223 12:47:02.955853       8 log.go:172] (0xc00171e2c0) Go away received
I0223 12:47:02.955929       8 log.go:172] (0xc00171e2c0) (0xc0013f3ae0) Stream removed, broadcasting: 1
I0223 12:47:02.955940       8 log.go:172] (0xc00171e2c0) (0xc0003ae5a0) Stream removed, broadcasting: 3
I0223 12:47:02.955951       8 log.go:172] (0xc00171e2c0) (0xc0020d8460) Stream removed, broadcasting: 5
Feb 23 12:47:02.955: INFO: Exec stderr: ""
Feb 23 12:47:02.956: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:02.956: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:03.022334       8 log.go:172] (0xc000f4a2c0) (0xc001ab1360) Create stream
I0223 12:47:03.022410       8 log.go:172] (0xc000f4a2c0) (0xc001ab1360) Stream added, broadcasting: 1
I0223 12:47:03.032604       8 log.go:172] (0xc000f4a2c0) Reply frame received for 1
I0223 12:47:03.032736       8 log.go:172] (0xc000f4a2c0) (0xc0020d8500) Create stream
I0223 12:47:03.032765       8 log.go:172] (0xc000f4a2c0) (0xc0020d8500) Stream added, broadcasting: 3
I0223 12:47:03.043687       8 log.go:172] (0xc000f4a2c0) Reply frame received for 3
I0223 12:47:03.043848       8 log.go:172] (0xc000f4a2c0) (0xc001ab1400) Create stream
I0223 12:47:03.043901       8 log.go:172] (0xc000f4a2c0) (0xc001ab1400) Stream added, broadcasting: 5
I0223 12:47:03.047205       8 log.go:172] (0xc000f4a2c0) Reply frame received for 5
I0223 12:47:03.191055       8 log.go:172] (0xc000f4a2c0) Data frame received for 3
I0223 12:47:03.191098       8 log.go:172] (0xc0020d8500) (3) Data frame handling
I0223 12:47:03.191115       8 log.go:172] (0xc0020d8500) (3) Data frame sent
I0223 12:47:03.298461       8 log.go:172] (0xc000f4a2c0) (0xc0020d8500) Stream removed, broadcasting: 3
I0223 12:47:03.298594       8 log.go:172] (0xc000f4a2c0) Data frame received for 1
I0223 12:47:03.298602       8 log.go:172] (0xc001ab1360) (1) Data frame handling
I0223 12:47:03.298612       8 log.go:172] (0xc001ab1360) (1) Data frame sent
I0223 12:47:03.298655       8 log.go:172] (0xc000f4a2c0) (0xc001ab1400) Stream removed, broadcasting: 5
I0223 12:47:03.298730       8 log.go:172] (0xc000f4a2c0) (0xc001ab1360) Stream removed, broadcasting: 1
I0223 12:47:03.298741       8 log.go:172] (0xc000f4a2c0) Go away received
I0223 12:47:03.299067       8 log.go:172] (0xc000f4a2c0) (0xc001ab1360) Stream removed, broadcasting: 1
I0223 12:47:03.299083       8 log.go:172] (0xc000f4a2c0) (0xc0020d8500) Stream removed, broadcasting: 3
I0223 12:47:03.299092       8 log.go:172] (0xc000f4a2c0) (0xc001ab1400) Stream removed, broadcasting: 5
Feb 23 12:47:03.299: INFO: Exec stderr: ""
Feb 23 12:47:03.299: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:03.299: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:03.363770       8 log.go:172] (0xc000f4a790) (0xc001ab1720) Create stream
I0223 12:47:03.363818       8 log.go:172] (0xc000f4a790) (0xc001ab1720) Stream added, broadcasting: 1
I0223 12:47:03.369400       8 log.go:172] (0xc000f4a790) Reply frame received for 1
I0223 12:47:03.369451       8 log.go:172] (0xc000f4a790) (0xc0020d85a0) Create stream
I0223 12:47:03.369473       8 log.go:172] (0xc000f4a790) (0xc0020d85a0) Stream added, broadcasting: 3
I0223 12:47:03.372183       8 log.go:172] (0xc000f4a790) Reply frame received for 3
I0223 12:47:03.372277       8 log.go:172] (0xc000f4a790) (0xc000d0c320) Create stream
I0223 12:47:03.372294       8 log.go:172] (0xc000f4a790) (0xc000d0c320) Stream added, broadcasting: 5
I0223 12:47:03.375333       8 log.go:172] (0xc000f4a790) Reply frame received for 5
I0223 12:47:03.473255       8 log.go:172] (0xc000f4a790) Data frame received for 3
I0223 12:47:03.473308       8 log.go:172] (0xc0020d85a0) (3) Data frame handling
I0223 12:47:03.473325       8 log.go:172] (0xc0020d85a0) (3) Data frame sent
I0223 12:47:03.592744       8 log.go:172] (0xc000f4a790) Data frame received for 1
I0223 12:47:03.592785       8 log.go:172] (0xc000f4a790) (0xc0020d85a0) Stream removed, broadcasting: 3
I0223 12:47:03.592830       8 log.go:172] (0xc001ab1720) (1) Data frame handling
I0223 12:47:03.592852       8 log.go:172] (0xc000f4a790) (0xc000d0c320) Stream removed, broadcasting: 5
I0223 12:47:03.592902       8 log.go:172] (0xc001ab1720) (1) Data frame sent
I0223 12:47:03.592942       8 log.go:172] (0xc000f4a790) (0xc001ab1720) Stream removed, broadcasting: 1
I0223 12:47:03.592968       8 log.go:172] (0xc000f4a790) Go away received
I0223 12:47:03.593199       8 log.go:172] (0xc000f4a790) (0xc001ab1720) Stream removed, broadcasting: 1
I0223 12:47:03.593238       8 log.go:172] (0xc000f4a790) (0xc0020d85a0) Stream removed, broadcasting: 3
I0223 12:47:03.593260       8 log.go:172] (0xc000f4a790) (0xc000d0c320) Stream removed, broadcasting: 5
Feb 23 12:47:03.593: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 23 12:47:03.593: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:03.593: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:03.675229       8 log.go:172] (0xc000ade2c0) (0xc0020d8820) Create stream
I0223 12:47:03.675276       8 log.go:172] (0xc000ade2c0) (0xc0020d8820) Stream added, broadcasting: 1
I0223 12:47:03.680131       8 log.go:172] (0xc000ade2c0) Reply frame received for 1
I0223 12:47:03.680175       8 log.go:172] (0xc000ade2c0) (0xc0013f3b80) Create stream
I0223 12:47:03.680184       8 log.go:172] (0xc000ade2c0) (0xc0013f3b80) Stream added, broadcasting: 3
I0223 12:47:03.681490       8 log.go:172] (0xc000ade2c0) Reply frame received for 3
I0223 12:47:03.681524       8 log.go:172] (0xc000ade2c0) (0xc000d0c460) Create stream
I0223 12:47:03.681531       8 log.go:172] (0xc000ade2c0) (0xc000d0c460) Stream added, broadcasting: 5
I0223 12:47:03.682775       8 log.go:172] (0xc000ade2c0) Reply frame received for 5
I0223 12:47:03.827330       8 log.go:172] (0xc000ade2c0) Data frame received for 3
I0223 12:47:03.827407       8 log.go:172] (0xc0013f3b80) (3) Data frame handling
I0223 12:47:03.827436       8 log.go:172] (0xc0013f3b80) (3) Data frame sent
I0223 12:47:04.050486       8 log.go:172] (0xc000ade2c0) Data frame received for 1
I0223 12:47:04.050633       8 log.go:172] (0xc000ade2c0) (0xc0013f3b80) Stream removed, broadcasting: 3
I0223 12:47:04.050679       8 log.go:172] (0xc0020d8820) (1) Data frame handling
I0223 12:47:04.050697       8 log.go:172] (0xc0020d8820) (1) Data frame sent
I0223 12:47:04.050818       8 log.go:172] (0xc000ade2c0) (0xc000d0c460) Stream removed, broadcasting: 5
I0223 12:47:04.050859       8 log.go:172] (0xc000ade2c0) (0xc0020d8820) Stream removed, broadcasting: 1
I0223 12:47:04.050937       8 log.go:172] (0xc000ade2c0) Go away received
I0223 12:47:04.051428       8 log.go:172] (0xc000ade2c0) (0xc0020d8820) Stream removed, broadcasting: 1
I0223 12:47:04.051449       8 log.go:172] (0xc000ade2c0) (0xc0013f3b80) Stream removed, broadcasting: 3
I0223 12:47:04.051470       8 log.go:172] (0xc000ade2c0) (0xc000d0c460) Stream removed, broadcasting: 5
Feb 23 12:47:04.051: INFO: Exec stderr: ""
Feb 23 12:47:04.051: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:04.051: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:04.143387       8 log.go:172] (0xc000c56420) (0xc000d0c960) Create stream
I0223 12:47:04.143493       8 log.go:172] (0xc000c56420) (0xc000d0c960) Stream added, broadcasting: 1
I0223 12:47:04.148276       8 log.go:172] (0xc000c56420) Reply frame received for 1
I0223 12:47:04.148318       8 log.go:172] (0xc000c56420) (0xc001ab17c0) Create stream
I0223 12:47:04.148331       8 log.go:172] (0xc000c56420) (0xc001ab17c0) Stream added, broadcasting: 3
I0223 12:47:04.149200       8 log.go:172] (0xc000c56420) Reply frame received for 3
I0223 12:47:04.149215       8 log.go:172] (0xc000c56420) (0xc001ab1860) Create stream
I0223 12:47:04.149222       8 log.go:172] (0xc000c56420) (0xc001ab1860) Stream added, broadcasting: 5
I0223 12:47:04.149873       8 log.go:172] (0xc000c56420) Reply frame received for 5
I0223 12:47:04.269870       8 log.go:172] (0xc000c56420) Data frame received for 3
I0223 12:47:04.269996       8 log.go:172] (0xc001ab17c0) (3) Data frame handling
I0223 12:47:04.270029       8 log.go:172] (0xc001ab17c0) (3) Data frame sent
I0223 12:47:04.405413       8 log.go:172] (0xc000c56420) Data frame received for 1
I0223 12:47:04.405465       8 log.go:172] (0xc000d0c960) (1) Data frame handling
I0223 12:47:04.405487       8 log.go:172] (0xc000d0c960) (1) Data frame sent
I0223 12:47:04.405881       8 log.go:172] (0xc000c56420) (0xc000d0c960) Stream removed, broadcasting: 1
I0223 12:47:04.406337       8 log.go:172] (0xc000c56420) (0xc001ab17c0) Stream removed, broadcasting: 3
I0223 12:47:04.406735       8 log.go:172] (0xc000c56420) (0xc001ab1860) Stream removed, broadcasting: 5
I0223 12:47:04.406763       8 log.go:172] (0xc000c56420) (0xc000d0c960) Stream removed, broadcasting: 1
I0223 12:47:04.406775       8 log.go:172] (0xc000c56420) (0xc001ab17c0) Stream removed, broadcasting: 3
I0223 12:47:04.406787       8 log.go:172] (0xc000c56420) (0xc001ab1860) Stream removed, broadcasting: 5
Feb 23 12:47:04.407: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 23 12:47:04.407: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:04.407: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:04.495693       8 log.go:172] (0xc0000fd8c0) (0xc0003aedc0) Create stream
I0223 12:47:04.495834       8 log.go:172] (0xc0000fd8c0) (0xc0003aedc0) Stream added, broadcasting: 1
I0223 12:47:04.510926       8 log.go:172] (0xc0000fd8c0) Reply frame received for 1
I0223 12:47:04.511148       8 log.go:172] (0xc0000fd8c0) (0xc0013f3c20) Create stream
I0223 12:47:04.511182       8 log.go:172] (0xc0000fd8c0) (0xc0013f3c20) Stream added, broadcasting: 3
I0223 12:47:04.513687       8 log.go:172] (0xc0000fd8c0) Reply frame received for 3
I0223 12:47:04.513712       8 log.go:172] (0xc0000fd8c0) (0xc0020d88c0) Create stream
I0223 12:47:04.513721       8 log.go:172] (0xc0000fd8c0) (0xc0020d88c0) Stream added, broadcasting: 5
I0223 12:47:04.514980       8 log.go:172] (0xc0000fd8c0) Reply frame received for 5
I0223 12:47:04.688080       8 log.go:172] (0xc0000fd8c0) Data frame received for 3
I0223 12:47:04.688141       8 log.go:172] (0xc0013f3c20) (3) Data frame handling
I0223 12:47:04.688173       8 log.go:172] (0xc0013f3c20) (3) Data frame sent
I0223 12:47:04.816417       8 log.go:172] (0xc0000fd8c0) (0xc0013f3c20) Stream removed, broadcasting: 3
I0223 12:47:04.816654       8 log.go:172] (0xc0000fd8c0) Data frame received for 1
I0223 12:47:04.816673       8 log.go:172] (0xc0003aedc0) (1) Data frame handling
I0223 12:47:04.816694       8 log.go:172] (0xc0003aedc0) (1) Data frame sent
I0223 12:47:04.816752       8 log.go:172] (0xc0000fd8c0) (0xc0003aedc0) Stream removed, broadcasting: 1
I0223 12:47:04.817118       8 log.go:172] (0xc0000fd8c0) (0xc0020d88c0) Stream removed, broadcasting: 5
I0223 12:47:04.817240       8 log.go:172] (0xc0000fd8c0) Go away received
I0223 12:47:04.817363       8 log.go:172] (0xc0000fd8c0) (0xc0003aedc0) Stream removed, broadcasting: 1
I0223 12:47:04.817481       8 log.go:172] (0xc0000fd8c0) (0xc0013f3c20) Stream removed, broadcasting: 3
I0223 12:47:04.817563       8 log.go:172] (0xc0000fd8c0) (0xc0020d88c0) Stream removed, broadcasting: 5
Feb 23 12:47:04.817: INFO: Exec stderr: ""
Feb 23 12:47:04.817: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:04.817: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:04.900177       8 log.go:172] (0xc000f4ac60) (0xc001ab1ae0) Create stream
I0223 12:47:04.900283       8 log.go:172] (0xc000f4ac60) (0xc001ab1ae0) Stream added, broadcasting: 1
I0223 12:47:04.906026       8 log.go:172] (0xc000f4ac60) Reply frame received for 1
I0223 12:47:04.906064       8 log.go:172] (0xc000f4ac60) (0xc000d0caa0) Create stream
I0223 12:47:04.906075       8 log.go:172] (0xc000f4ac60) (0xc000d0caa0) Stream added, broadcasting: 3
I0223 12:47:04.907630       8 log.go:172] (0xc000f4ac60) Reply frame received for 3
I0223 12:47:04.907676       8 log.go:172] (0xc000f4ac60) (0xc0013f3d60) Create stream
I0223 12:47:04.907689       8 log.go:172] (0xc000f4ac60) (0xc0013f3d60) Stream added, broadcasting: 5
I0223 12:47:04.908568       8 log.go:172] (0xc000f4ac60) Reply frame received for 5
I0223 12:47:05.022299       8 log.go:172] (0xc000f4ac60) Data frame received for 3
I0223 12:47:05.022359       8 log.go:172] (0xc000d0caa0) (3) Data frame handling
I0223 12:47:05.022381       8 log.go:172] (0xc000d0caa0) (3) Data frame sent
I0223 12:47:05.143635       8 log.go:172] (0xc000f4ac60) (0xc000d0caa0) Stream removed, broadcasting: 3
I0223 12:47:05.143823       8 log.go:172] (0xc000f4ac60) Data frame received for 1
I0223 12:47:05.143938       8 log.go:172] (0xc000f4ac60) (0xc0013f3d60) Stream removed, broadcasting: 5
I0223 12:47:05.143998       8 log.go:172] (0xc001ab1ae0) (1) Data frame handling
I0223 12:47:05.144094       8 log.go:172] (0xc001ab1ae0) (1) Data frame sent
I0223 12:47:05.144131       8 log.go:172] (0xc000f4ac60) (0xc001ab1ae0) Stream removed, broadcasting: 1
I0223 12:47:05.144147       8 log.go:172] (0xc000f4ac60) Go away received
I0223 12:47:05.144775       8 log.go:172] (0xc000f4ac60) (0xc001ab1ae0) Stream removed, broadcasting: 1
I0223 12:47:05.144822       8 log.go:172] (0xc000f4ac60) (0xc000d0caa0) Stream removed, broadcasting: 3
I0223 12:47:05.144832       8 log.go:172] (0xc000f4ac60) (0xc0013f3d60) Stream removed, broadcasting: 5
Feb 23 12:47:05.145: INFO: Exec stderr: ""
Feb 23 12:47:05.145: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:05.145: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:05.230163       8 log.go:172] (0xc000f4b130) (0xc001ab1d60) Create stream
I0223 12:47:05.230221       8 log.go:172] (0xc000f4b130) (0xc001ab1d60) Stream added, broadcasting: 1
I0223 12:47:05.235130       8 log.go:172] (0xc000f4b130) Reply frame received for 1
I0223 12:47:05.235198       8 log.go:172] (0xc000f4b130) (0xc0013f3e00) Create stream
I0223 12:47:05.235208       8 log.go:172] (0xc000f4b130) (0xc0013f3e00) Stream added, broadcasting: 3
I0223 12:47:05.236127       8 log.go:172] (0xc000f4b130) Reply frame received for 3
I0223 12:47:05.236152       8 log.go:172] (0xc000f4b130) (0xc001ab1e00) Create stream
I0223 12:47:05.236172       8 log.go:172] (0xc000f4b130) (0xc001ab1e00) Stream added, broadcasting: 5
I0223 12:47:05.237058       8 log.go:172] (0xc000f4b130) Reply frame received for 5
I0223 12:47:05.324440       8 log.go:172] (0xc000f4b130) Data frame received for 3
I0223 12:47:05.324487       8 log.go:172] (0xc0013f3e00) (3) Data frame handling
I0223 12:47:05.324515       8 log.go:172] (0xc0013f3e00) (3) Data frame sent
I0223 12:47:05.431902       8 log.go:172] (0xc000f4b130) Data frame received for 1
I0223 12:47:05.431993       8 log.go:172] (0xc000f4b130) (0xc001ab1e00) Stream removed, broadcasting: 5
I0223 12:47:05.432030       8 log.go:172] (0xc001ab1d60) (1) Data frame handling
I0223 12:47:05.432053       8 log.go:172] (0xc001ab1d60) (1) Data frame sent
I0223 12:47:05.432093       8 log.go:172] (0xc000f4b130) (0xc0013f3e00) Stream removed, broadcasting: 3
I0223 12:47:05.432140       8 log.go:172] (0xc000f4b130) (0xc001ab1d60) Stream removed, broadcasting: 1
I0223 12:47:05.432165       8 log.go:172] (0xc000f4b130) Go away received
I0223 12:47:05.432845       8 log.go:172] (0xc000f4b130) (0xc001ab1d60) Stream removed, broadcasting: 1
I0223 12:47:05.432875       8 log.go:172] (0xc000f4b130) (0xc0013f3e00) Stream removed, broadcasting: 3
I0223 12:47:05.432893       8 log.go:172] (0xc000f4b130) (0xc001ab1e00) Stream removed, broadcasting: 5
Feb 23 12:47:05.432: INFO: Exec stderr: ""
Feb 23 12:47:05.433: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-sz5vs PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 23 12:47:05.433: INFO: >>> kubeConfig: /root/.kube/config
I0223 12:47:05.498225       8 log.go:172] (0xc00171e790) (0xc00033c6e0) Create stream
I0223 12:47:05.498286       8 log.go:172] (0xc00171e790) (0xc00033c6e0) Stream added, broadcasting: 1
I0223 12:47:05.501661       8 log.go:172] (0xc00171e790) Reply frame received for 1
I0223 12:47:05.501687       8 log.go:172] (0xc00171e790) (0xc00033cbe0) Create stream
I0223 12:47:05.501695       8 log.go:172] (0xc00171e790) (0xc00033cbe0) Stream added, broadcasting: 3
I0223 12:47:05.502779       8 log.go:172] (0xc00171e790) Reply frame received for 3
I0223 12:47:05.502819       8 log.go:172] (0xc00171e790) (0xc0020d8960) Create stream
I0223 12:47:05.502837       8 log.go:172] (0xc00171e790) (0xc0020d8960) Stream added, broadcasting: 5
I0223 12:47:05.503754       8 log.go:172] (0xc00171e790) Reply frame received for 5
I0223 12:47:05.643224       8 log.go:172] (0xc00171e790) Data frame received for 3
I0223 12:47:05.643262       8 log.go:172] (0xc00033cbe0) (3) Data frame handling
I0223 12:47:05.643289       8 log.go:172] (0xc00033cbe0) (3) Data frame sent
I0223 12:47:05.764952       8 log.go:172] (0xc00171e790) Data frame received for 1
I0223 12:47:05.765008       8 log.go:172] (0xc00033c6e0) (1) Data frame handling
I0223 12:47:05.765071       8 log.go:172] (0xc00033c6e0) (1) Data frame sent
I0223 12:47:05.765914       8 log.go:172] (0xc00171e790) (0xc00033c6e0) Stream removed, broadcasting: 1
I0223 12:47:05.766943       8 log.go:172] (0xc00171e790) (0xc00033cbe0) Stream removed, broadcasting: 3
I0223 12:47:05.767584       8 log.go:172] (0xc00171e790) (0xc0020d8960) Stream removed, broadcasting: 5
I0223 12:47:05.767646       8 log.go:172] (0xc00171e790) (0xc00033c6e0) Stream removed, broadcasting: 1
I0223 12:47:05.767698       8 log.go:172] (0xc00171e790) (0xc00033cbe0) Stream removed, broadcasting: 3
I0223 12:47:05.767746       8 log.go:172] (0xc00171e790) (0xc0020d8960) Stream removed, broadcasting: 5
Feb 23 12:47:05.768: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:47:05.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-sz5vs" for this suite.
Feb 23 12:47:53.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:47:54.071: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-sz5vs, resource: bindings, ignored listing per whitelist
Feb 23 12:47:54.097: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-sz5vs deletion completed in 48.310377397s

• [SLOW TEST:76.855 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
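For readers following along, the KubeletManagedEtcHosts test above hinges on two opt-outs from kubelet's /etc/hosts management: a container that mounts its own file over /etc/hosts, and a pod running with hostNetwork: true. A minimal, illustrative manifest for the first case (image and names are placeholders, not the spec the e2e framework builds):

apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo              # illustrative name
spec:
  containers:
  - name: busybox-own-hosts
    image: busybox                  # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-override
      mountPath: /etc/hosts         # mounting over /etc/hosts opts this container out of kubelet management
  volumes:
  - name: hosts-override
    emptyDir: {}

A pod with hostNetwork: true gets the same treatment: it sees the node's own /etc/hosts, which kubelet leaves untouched.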
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:47:54.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-h2nxl
Feb 23 12:48:04.680: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-h2nxl
STEP: checking the pod's current state and verifying that restartCount is present
Feb 23 12:48:04.690: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:52:04.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-h2nxl" for this suite.
Feb 23 12:52:11.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:52:11.331: INFO: namespace: e2e-tests-container-probe-h2nxl, resource: bindings, ignored listing per whitelist
Feb 23 12:52:11.340: INFO: namespace e2e-tests-container-probe-h2nxl deletion completed in 6.328422772s

• [SLOW TEST:257.242 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
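The probe test above asserts that restartCount stays at 0 for roughly four minutes while /healthz keeps answering. A minimal sketch of such a probe, assuming a placeholder image that serves HTTP 200 on /healthz:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: registry.example.com/healthz-server   # placeholder: any image serving 200 OK on /healthz
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1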
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:52:11.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 12:52:11.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-6zpd5" to be "success or failure"
Feb 23 12:52:11.764: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.11028ms
Feb 23 12:52:13.951: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19848896s
Feb 23 12:52:15.970: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217368351s
Feb 23 12:52:18.811: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.058054987s
Feb 23 12:52:20.872: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.119212971s
Feb 23 12:52:22.885: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.132343985s
STEP: Saw pod success
Feb 23 12:52:22.885: INFO: Pod "downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:52:22.890: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 12:52:22.981: INFO: Waiting for pod downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008 to disappear
Feb 23 12:52:23.447: INFO: Pod downwardapi-volume-4eff1f3c-563b-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:52:23.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6zpd5" for this suite.
Feb 23 12:52:30.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:52:30.232: INFO: namespace: e2e-tests-downward-api-6zpd5, resource: bindings, ignored listing per whitelist
Feb 23 12:52:30.399: INFO: namespace e2e-tests-downward-api-6zpd5 deletion completed in 6.93777663s

• [SLOW TEST:19.058 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
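The Downward API volume test above checks the per-item mode field on a projected file. An illustrative manifest (image, paths, and the chosen fieldRef are assumptions, not the exact e2e spec):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  containers:
  - name: client-container
    image: busybox                   # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # octal per-item file mode; this is what the test verifies
  restartPolicy: Never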
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:52:30.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-5a559d4f-563b-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:52:30.873: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-bngfd" to be "success or failure"
Feb 23 12:52:30.998: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 125.022945ms
Feb 23 12:52:33.016: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143823243s
Feb 23 12:52:35.038: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165019465s
Feb 23 12:52:37.092: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219616573s
Feb 23 12:52:39.134: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261373921s
Feb 23 12:52:41.540: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.667593935s
STEP: Saw pod success
Feb 23 12:52:41.541: INFO: Pod "pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:52:41.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 23 12:52:41.974: INFO: Waiting for pod pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008 to disappear
Feb 23 12:52:42.050: INFO: Pod pod-projected-secrets-5a6b8cdb-563b-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:52:42.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bngfd" for this suite.
Feb 23 12:52:48.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:52:48.258: INFO: namespace: e2e-tests-projected-bngfd, resource: bindings, ignored listing per whitelist
Feb 23 12:52:48.285: INFO: namespace e2e-tests-projected-bngfd deletion completed in 6.212618922s

• [SLOW TEST:17.885 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
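"With mappings" in the projected-secret test above means individual secret keys are remapped to custom paths via items. A minimal sketch, assuming a Secret named my-secret that has a username key:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # placeholder image
    command: ["sh", "-c", "cat /etc/projected-secret/remapped/username"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: my-secret            # assumed to exist
          items:
          - key: username
            path: remapped/username  # the key is surfaced under a remapped path
  restartPolicy: Never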
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:52:48.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 23 12:53:08.852: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:08.900: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:10.900: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:10.921: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:12.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:12.944: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:14.900: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:14.947: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:16.900: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:16.914: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:18.900: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:19.743: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:20.900: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:20.930: INFO: Pod pod-with-prestop-http-hook still exists
Feb 23 12:53:22.900: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 23 12:53:22.923: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:53:22.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tv7w4" for this suite.
Feb 23 12:53:46.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:53:47.177: INFO: namespace: e2e-tests-container-lifecycle-hook-tv7w4, resource: bindings, ignored listing per whitelist
Feb 23 12:53:47.194: INFO: namespace e2e-tests-container-lifecycle-hook-tv7w4 deletion completed in 24.238152599s

• [SLOW TEST:58.909 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
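The lifecycle-hook test above registers an HTTP preStop hook and then confirms the handler received the request before the pod disappeared. A hedged sketch of such a hook (host and path are hypothetical; if host is omitted, the request goes to the pod's own IP):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo
spec:
  containers:
  - name: main
    image: nginx                     # placeholder image
    lifecycle:
      preStop:
        httpGet:
          path: /prestop             # hypothetical handler endpoint
          port: 8080
          host: handler.default.svc.cluster.local   # hypothetical handler address
  terminationGracePeriodSeconds: 30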
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:53:47.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 23 12:53:47.481: INFO: Waiting up to 5m0s for pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008" in namespace "e2e-tests-var-expansion-pxqqx" to be "success or failure"
Feb 23 12:53:47.532: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 50.969571ms
Feb 23 12:53:49.726: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245031644s
Feb 23 12:53:51.738: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25667398s
Feb 23 12:53:54.445: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.963914042s
Feb 23 12:53:56.802: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.321206351s
Feb 23 12:53:58.823: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.342198733s
STEP: Saw pod success
Feb 23 12:53:58.823: INFO: Pod "var-expansion-88183ab9-563b-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:53:58.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-88183ab9-563b-11ea-8363-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 23 12:53:59.104: INFO: Waiting for pod var-expansion-88183ab9-563b-11ea-8363-0242ac110008 to disappear
Feb 23 12:53:59.136: INFO: Pod var-expansion-88183ab9-563b-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:53:59.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pxqqx" for this suite.
Feb 23 12:54:06.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:54:06.078: INFO: namespace: e2e-tests-var-expansion-pxqqx, resource: bindings, ignored listing per whitelist
Feb 23 12:54:06.214: INFO: namespace e2e-tests-var-expansion-pxqqx deletion completed in 7.074073132s

• [SLOW TEST:19.019 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
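The Variable Expansion test above relies on the kubelet substituting $(VAR) references in args before the container starts. A minimal illustrative manifest:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  containers:
  - name: dapi-container
    image: busybox                   # placeholder image
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]        # $(MESSAGE) is expanded to "test-value" at container start
  restartPolicy: Never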
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:54:06.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-pr9x
STEP: Creating a pod to test atomic-volume-subpath
Feb 23 12:54:06.397: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pr9x" in namespace "e2e-tests-subpath-k6cnf" to be "success or failure"
Feb 23 12:54:06.420: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 23.326079ms
Feb 23 12:54:08.457: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060534194s
Feb 23 12:54:10.497: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100527648s
Feb 23 12:54:12.708: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311270203s
Feb 23 12:54:14.732: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335566083s
Feb 23 12:54:17.358: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.961861126s
Feb 23 12:54:19.372: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.975328465s
Feb 23 12:54:21.654: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.25700096s
Feb 23 12:54:23.661: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 17.264576482s
Feb 23 12:54:27.338: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 20.941034933s
Feb 23 12:54:29.359: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 22.961915614s
Feb 23 12:54:31.379: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 24.982068634s
Feb 23 12:54:33.393: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 26.996418109s
Feb 23 12:54:35.412: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 29.015535766s
Feb 23 12:54:37.425: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 31.028303144s
Feb 23 12:54:39.444: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 33.047487381s
Feb 23 12:54:41.461: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 35.06463625s
Feb 23 12:54:43.515: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Running", Reason="", readiness=false. Elapsed: 37.118516406s
Feb 23 12:54:45.529: INFO: Pod "pod-subpath-test-configmap-pr9x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.131968547s
STEP: Saw pod success
Feb 23 12:54:45.529: INFO: Pod "pod-subpath-test-configmap-pr9x" satisfied condition "success or failure"
Feb 23 12:54:45.534: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pr9x container test-container-subpath-configmap-pr9x: 
STEP: delete the pod
Feb 23 12:54:45.646: INFO: Waiting for pod pod-subpath-test-configmap-pr9x to disappear
Feb 23 12:54:51.554: INFO: Pod pod-subpath-test-configmap-pr9x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pr9x
Feb 23 12:54:51.554: INFO: Deleting pod "pod-subpath-test-configmap-pr9x" in namespace "e2e-tests-subpath-k6cnf"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:54:52.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-k6cnf" for this suite.
Feb 23 12:55:00.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:55:00.770: INFO: namespace: e2e-tests-subpath-k6cnf, resource: bindings, ignored listing per whitelist
Feb 23 12:55:00.862: INFO: namespace e2e-tests-subpath-k6cnf deletion completed in 8.237155189s

• [SLOW TEST:54.648 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
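The subpath test above mounts a single ConfigMap key over a file that already exists in the image. An illustrative sketch using nginx.conf as the pre-existing file (the ConfigMap name and key are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo
spec:
  containers:
  - name: test-container-subpath
    image: nginx                     # placeholder; nginx.conf already exists in this image
    volumeMounts:
    - name: config
      mountPath: /etc/nginx/nginx.conf   # mountPath is the existing file itself
      subPath: nginx.conf                # only this key from the volume replaces it
  volumes:
  - name: config
    configMap:
      name: nginx-config             # assumed ConfigMap with an "nginx.conf" key

Because subPath targets one file, the rest of the image's directory stays intact rather than being shadowed by the whole volume.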
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:55:00.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008
Feb 23 12:55:01.128: INFO: Pod name my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008: Found 0 pods out of 1
Feb 23 12:55:06.745: INFO: Pod name my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008: Found 1 pods out of 1
Feb 23 12:55:06.745: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008" are running
Feb 23 12:55:18.795: INFO: Pod "my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008-hqvvd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:55:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:55:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:55:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 12:55:01 +0000 UTC Reason: Message:}])
Feb 23 12:55:18.795: INFO: Trying to dial the pod
Feb 23 12:55:23.878: INFO: Controller my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008: Got expected result from replica 1 [my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008-hqvvd]: "my-hostname-basic-b3fd56c7-563b-11ea-8363-0242ac110008-hqvvd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:55:23.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-jzlrx" for this suite.
Feb 23 12:55:30.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:55:30.049: INFO: namespace: e2e-tests-replication-controller-jzlrx, resource: bindings, ignored listing per whitelist
Feb 23 12:55:30.157: INFO: namespace e2e-tests-replication-controller-jzlrx deletion completed in 6.257298349s

• [SLOW TEST:29.294 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
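The ReplicationController test above brings up one replica of a hostname-serving image and dials it to confirm the reply matches the pod name. A minimal illustrative RC (the image is a placeholder for anything that echoes its hostname over HTTP on port 9376):

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: registry.example.com/serve-hostname   # placeholder image
        ports:
        - containerPort: 9376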
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:55:30.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 12:55:30.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-vt5bs" to be "success or failure"
Feb 23 12:55:30.353: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.175698ms
Feb 23 12:55:32.539: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211770169s
Feb 23 12:55:34.564: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237021988s
Feb 23 12:55:36.687: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360104561s
Feb 23 12:55:38.725: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398509732s
Feb 23 12:55:41.020: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.693093236s
Feb 23 12:55:43.031: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.704393068s
Feb 23 12:55:46.900: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.573373621s
STEP: Saw pod success
Feb 23 12:55:46.900: INFO: Pod "downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:55:47.427: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 12:55:47.667: INFO: Waiting for pod downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008 to disappear
Feb 23 12:55:47.683: INFO: Pod downwardapi-volume-c56490f4-563b-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:55:47.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vt5bs" for this suite.
Feb 23 12:55:53.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:55:54.105: INFO: namespace: e2e-tests-projected-vt5bs, resource: bindings, ignored listing per whitelist
Feb 23 12:55:54.173: INFO: namespace e2e-tests-projected-vt5bs deletion completed in 6.480611291s

• [SLOW TEST:24.016 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
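The projected downwardAPI test above exposes the container's memory limit as a file in the volume. An illustrative manifest; with divisor: 1Mi the file would read 64 for the limit shown (names and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-memlimit-demo
spec:
  containers:
  - name: client-container
    image: busybox                   # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi           # value is reported in units of the divisor
  restartPolicy: Never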
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:55:54.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-d4f9fd68-563b-11ea-8363-0242ac110008
STEP: Creating secret with name secret-projected-all-test-volume-d4f9fceb-563b-11ea-8363-0242ac110008
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 23 12:55:56.756: INFO: Waiting up to 5m0s for pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-jtp6r" to be "success or failure"
Feb 23 12:55:56.766: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.939659ms
Feb 23 12:55:58.782: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025907465s
Feb 23 12:56:00.807: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050283249s
Feb 23 12:56:03.069: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313132645s
Feb 23 12:56:05.097: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340328785s
Feb 23 12:56:07.269: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.512547465s
STEP: Saw pod success
Feb 23 12:56:07.269: INFO: Pod "projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:56:07.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008 container projected-all-volume-test: 
STEP: delete the pod
Feb 23 12:56:07.440: INFO: Waiting for pod projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008 to disappear
Feb 23 12:56:07.451: INFO: Pod projected-volume-d4f9fc6c-563b-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:56:07.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jtp6r" for this suite.
Feb 23 12:56:15.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:56:15.890: INFO: namespace: e2e-tests-projected-jtp6r, resource: bindings, ignored listing per whitelist
Feb 23 12:56:15.911: INFO: namespace e2e-tests-projected-jtp6r deletion completed in 8.452061618s

• [SLOW TEST:21.738 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
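"All components" in the projected-volume test above means a ConfigMap, a Secret, and downward API items merged under one mount. A minimal sketch, assuming my-configmap and my-secret already exist:

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox                   # placeholder image
    command: ["sh", "-c", "ls /projected-volume"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap         # assumed to exist
      - secret:
          name: my-secret            # assumed to exist
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never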
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:56:15.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0223 12:56:30.293302       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 23 12:56:30.293: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:56:30.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dtl84" for this suite.
Feb 23 12:57:02.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:57:02.667: INFO: namespace: e2e-tests-gc-dtl84, resource: bindings, ignored listing per whitelist
Feb 23 12:57:02.789: INFO: namespace e2e-tests-gc-dtl84 deletion completed in 32.484014168s

• [SLOW TEST:46.877 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
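The garbage-collector test above gives half of rc1's pods a second owner reference to rc2; when rc1 is deleted with foreground cascading, those pods must survive because a valid owner remains. A sketch of what such a dependent pod's metadata looks like (a fragment, not a complete manifest; the UIDs are placeholders and must match the live RC objects):

metadata:
  name: simpletest-pod
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: "<uid-of-rc-to-be-deleted>"   # placeholder
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: "<uid-of-rc-to-stay>"         # placeholder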
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:57:02.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 23 12:57:03.136: INFO: Waiting up to 5m0s for pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-qvk67" to be "success or failure"
Feb 23 12:57:03.346: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 210.182143ms
Feb 23 12:57:05.371: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234643786s
Feb 23 12:57:07.397: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260836735s
Feb 23 12:57:09.409: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273239009s
Feb 23 12:57:11.839: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.703119352s
Feb 23 12:57:13.922: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.786080703s
Feb 23 12:57:15.939: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.80327927s
STEP: Saw pod success
Feb 23 12:57:15.939: INFO: Pod "pod-fcb6d3af-563b-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:57:15.945: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fcb6d3af-563b-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 12:57:16.085: INFO: Waiting for pod pod-fcb6d3af-563b-11ea-8363-0242ac110008 to disappear
Feb 23 12:57:16.101: INFO: Pod pod-fcb6d3af-563b-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:57:16.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qvk67" for this suite.
Feb 23 12:57:23.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:57:23.293: INFO: namespace: e2e-tests-emptydir-qvk67, resource: bindings, ignored listing per whitelist
Feb 23 12:57:23.381: INFO: namespace e2e-tests-emptydir-qvk67 deletion completed in 7.269434398s

• [SLOW TEST:20.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
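The EmptyDir test above requests a memory-backed volume and checks its mount mode. A minimal illustrative manifest; medium: Memory is what switches the emptyDir from node disk to tmpfs:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox                   # placeholder image
    command: ["sh", "-c", "mount | grep /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
  restartPolicy: Never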
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:57:23.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb 23 12:57:23.699: INFO: Waiting up to 5m0s for pod "client-containers-08f89323-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-containers-gs8cp" to be "success or failure"
Feb 23 12:57:23.844: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 145.363045ms
Feb 23 12:57:25.913: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214384223s
Feb 23 12:57:27.936: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237719247s
Feb 23 12:57:29.957: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257897886s
Feb 23 12:57:32.534: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.835511416s
Feb 23 12:57:34.559: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86073916s
Feb 23 12:57:36.605: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.906281787s
Feb 23 12:57:38.774: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 15.074878346s
Feb 23 12:57:40.793: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.094600788s
STEP: Saw pod success
Feb 23 12:57:40.793: INFO: Pod "client-containers-08f89323-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:57:40.804: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-08f89323-563c-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 12:57:41.492: INFO: Waiting for pod client-containers-08f89323-563c-11ea-8363-0242ac110008 to disappear
Feb 23 12:57:41.500: INFO: Pod client-containers-08f89323-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:57:41.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gs8cp" for this suite.
Feb 23 12:57:47.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:57:47.989: INFO: namespace: e2e-tests-containers-gs8cp, resource: bindings, ignored listing per whitelist
Feb 23 12:57:48.014: INFO: namespace e2e-tests-containers-gs8cp deletion completed in 6.498498495s

• [SLOW TEST:24.633 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
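The Docker Containers test above verifies that omitting command and args leaves the image's own ENTRYPOINT and CMD in effect. A hedged sketch (the image is a placeholder for anything with a useful default entrypoint):

apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: test-container
    image: registry.example.com/entrypoint-demo   # placeholder image
    # no command: or args: fields, so the image's ENTRYPOINT and CMD run unchanged
  restartPolicy: Never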
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:57:48.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-17ab7f4c-563c-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 12:57:48.509: INFO: Waiting up to 5m0s for pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-z2fsg" to be "success or failure"
Feb 23 12:57:48.539: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.746773ms
Feb 23 12:57:50.560: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051182199s
Feb 23 12:57:52.601: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092044192s
Feb 23 12:57:54.626: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116796306s
Feb 23 12:57:57.056: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547068376s
Feb 23 12:57:59.073: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.563553596s
Feb 23 12:58:04.025: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.515947063s
Feb 23 12:58:06.042: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.53245272s
STEP: Saw pod success
Feb 23 12:58:06.042: INFO: Pod "pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:58:06.049: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 23 12:58:06.827: INFO: Waiting for pod pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008 to disappear
Feb 23 12:58:06.855: INFO: Pod pod-configmaps-17ad8ad8-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:58:06.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z2fsg" for this suite.
Feb 23 12:58:15.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:58:15.902: INFO: namespace: e2e-tests-configmap-z2fsg, resource: bindings, ignored listing per whitelist
Feb 23 12:58:15.934: INFO: namespace e2e-tests-configmap-z2fsg deletion completed in 9.057288844s

• [SLOW TEST:27.920 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
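For orientation (not part of the test output): the spec above mounts one ConfigMap into two volumes of the same pod. A minimal Go sketch of that kind of pod, using the upstream k8s.io/api types; the image, names, and command here are illustrative assumptions, not the exact objects the e2e framework generates.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVolumeConfigMapPod mounts the same ConfigMap twice, at two different
// paths, which is what the "consumable in multiple volumes in the same pod"
// conformance test exercises.
func multiVolumeConfigMapPod(configMapName string) *corev1.Pod {
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource()},
				{Name: "configmap-volume-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume-1/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
		},
	}
}

func main() { _ = multiVolumeConfigMapPod("configmap-test-volume") }
```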
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:58:15.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 12:58:16.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 23 12:58:16.612: INFO: stderr: ""
Feb 23 12:58:16.612: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 23 12:58:16.621: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:58:16.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6sjp7" for this suite.
Feb 23 12:58:22.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:58:22.809: INFO: namespace: e2e-tests-kubectl-6sjp7, resource: bindings, ignored listing per whitelist
Feb 23 12:58:22.955: INFO: namespace e2e-tests-kubectl-6sjp7 deletion completed in 6.25705134s

S [SKIPPING] [7.020 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 23 12:58:16.621: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:58:22.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 23 12:58:23.188: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2p44d,SelfLink:/api/v1/namespaces/e2e-tests-watch-2p44d/configmaps/e2e-watch-test-label-changed,UID:2c6d6cfb-563c-11ea-a994-fa163e34d433,ResourceVersion:22649895,Generation:0,CreationTimestamp:2020-02-23 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 23 12:58:23.188: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2p44d,SelfLink:/api/v1/namespaces/e2e-tests-watch-2p44d/configmaps/e2e-watch-test-label-changed,UID:2c6d6cfb-563c-11ea-a994-fa163e34d433,ResourceVersion:22649896,Generation:0,CreationTimestamp:2020-02-23 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 23 12:58:23.188: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2p44d,SelfLink:/api/v1/namespaces/e2e-tests-watch-2p44d/configmaps/e2e-watch-test-label-changed,UID:2c6d6cfb-563c-11ea-a994-fa163e34d433,ResourceVersion:22649897,Generation:0,CreationTimestamp:2020-02-23 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 23 12:58:33.323: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2p44d,SelfLink:/api/v1/namespaces/e2e-tests-watch-2p44d/configmaps/e2e-watch-test-label-changed,UID:2c6d6cfb-563c-11ea-a994-fa163e34d433,ResourceVersion:22649911,Generation:0,CreationTimestamp:2020-02-23 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 23 12:58:33.323: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2p44d,SelfLink:/api/v1/namespaces/e2e-tests-watch-2p44d/configmaps/e2e-watch-test-label-changed,UID:2c6d6cfb-563c-11ea-a994-fa163e34d433,ResourceVersion:22649912,Generation:0,CreationTimestamp:2020-02-23 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 23 12:58:33.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2p44d,SelfLink:/api/v1/namespaces/e2e-tests-watch-2p44d/configmaps/e2e-watch-test-label-changed,UID:2c6d6cfb-563c-11ea-a994-fa163e34d433,ResourceVersion:22649913,Generation:0,CreationTimestamp:2020-02-23 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:58:33.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2p44d" for this suite.
Feb 23 12:58:41.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:58:41.453: INFO: namespace: e2e-tests-watch-2p44d, resource: bindings, ignored listing per whitelist
Feb 23 12:58:41.646: INFO: namespace e2e-tests-watch-2p44d deletion completed in 8.310933045s

• [SLOW TEST:18.691 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
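For orientation (not part of the test output): the watch test above opens a label-selected watch and expects a DELETED event when the ConfigMap's label is changed so it no longer matches the selector, then an ADDED event when the label is restored. A hedged client-go sketch of that selector-scoped watch, assuming a client-go recent enough that Watch takes a context.Context (the 1.13-era client in this log did not); the selector value mirrors the one in the log.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchLabelledConfigMaps opens a watch restricted by a label selector.
// A ConfigMap whose label is changed so it no longer matches the selector
// surfaces on this watch as a DELETED event, even though the object still
// exists in the cluster.
func watchLabelledConfigMaps(ctx context.Context, client kubernetes.Interface, namespace string) error {
	w, err := client.CoreV1().ConfigMaps(namespace).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		switch ev.Type {
		case watch.Added, watch.Modified, watch.Deleted:
			fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
		}
	}
	return nil
}

func main() {}
```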
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:58:41.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 12:58:42.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-downward-api-b77n7" to be "success or failure"
Feb 23 12:58:42.238: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 210.04789ms
Feb 23 12:58:44.253: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224296565s
Feb 23 12:58:47.444: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.415451227s
Feb 23 12:58:49.496: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.467977091s
Feb 23 12:58:52.181: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1524031s
Feb 23 12:58:54.894: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.865873864s
Feb 23 12:58:57.250: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.221308151s
Feb 23 12:58:59.262: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.233490037s
Feb 23 12:59:01.299: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.270485365s
Feb 23 12:59:03.317: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.288532459s
STEP: Saw pod success
Feb 23 12:59:03.317: INFO: Pod "downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:59:03.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 12:59:03.559: INFO: Waiting for pod downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008 to disappear
Feb 23 12:59:03.583: INFO: Pod downwardapi-volume-37a990d9-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:59:03.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b77n7" for this suite.
Feb 23 12:59:11.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:59:11.863: INFO: namespace: e2e-tests-downward-api-b77n7, resource: bindings, ignored listing per whitelist
Feb 23 12:59:11.948: INFO: namespace e2e-tests-downward-api-b77n7 deletion completed in 8.34712177s

• [SLOW TEST:30.301 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
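For orientation (not part of the test output): the downward API volume plugin exercised above writes a container's own resource fields into files. A minimal Go sketch of a pod that exposes its CPU request through a resourceFieldRef; the image, paths, and quantities are illustrative assumptions.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPURequestPod writes the container's CPU request into a downward
// API volume file, which the test then reads back from the container logs.
func downwardAPICPURequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = downwardAPICPURequestPod() }
```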
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:59:11.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 23 12:59:12.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qhdpv'
Feb 23 12:59:15.240: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 23 12:59:15.240: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 23 12:59:15.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-qhdpv'
Feb 23 12:59:15.671: INFO: stderr: ""
Feb 23 12:59:15.671: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:59:15.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qhdpv" for this suite.
Feb 23 12:59:39.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 12:59:40.175: INFO: namespace: e2e-tests-kubectl-qhdpv, resource: bindings, ignored listing per whitelist
Feb 23 12:59:40.189: INFO: namespace e2e-tests-kubectl-qhdpv deletion completed in 24.482001776s

• [SLOW TEST:28.240 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
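For orientation (not part of the test output): the deprecated `kubectl run --generator=job/v1 --restart=OnFailure` invocation above creates a Job whose pod template restarts failed containers. A rough Go sketch of an equivalent object built directly with the batch/v1 types; it is an approximation of what the generator produces, not its exact output.

```go
package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxJob is roughly what the deprecated job/v1 generator creates: a Job
// whose pod template uses RestartPolicyOnFailure so failed pods are retried.
func nginxJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { _ = nginxJob() }
```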
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 12:59:40.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5a785e30-563c-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 12:59:40.662: INFO: Waiting up to 5m0s for pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-d9zqw" to be "success or failure"
Feb 23 12:59:40.668: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673492ms
Feb 23 12:59:42.708: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045392778s
Feb 23 12:59:44.722: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059986529s
Feb 23 12:59:46.847: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184555428s
Feb 23 12:59:48.876: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213701461s
Feb 23 12:59:52.185: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.522657308s
Feb 23 12:59:54.203: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.540436587s
Feb 23 12:59:56.219: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.556769395s
STEP: Saw pod success
Feb 23 12:59:56.219: INFO: Pod "pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 12:59:56.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 12:59:57.229: INFO: Waiting for pod pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008 to disappear
Feb 23 12:59:57.268: INFO: Pod pod-secrets-5a7a1e78-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 12:59:57.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d9zqw" for this suite.
Feb 23 13:00:05.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:00:05.883: INFO: namespace: e2e-tests-secrets-d9zqw, resource: bindings, ignored listing per whitelist
Feb 23 13:00:06.148: INFO: namespace e2e-tests-secrets-d9zqw deletion completed in 8.647934018s

• [SLOW TEST:25.959 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:00:06.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-69e7970b-563c-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 13:00:06.361: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-jfsqf" to be "success or failure"
Feb 23 13:00:06.380: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.171906ms
Feb 23 13:00:08.574: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212533359s
Feb 23 13:00:10.639: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278111291s
Feb 23 13:00:13.399: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.03749011s
Feb 23 13:00:15.499: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.138340057s
Feb 23 13:00:17.512: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.151052775s
Feb 23 13:00:19.795: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.434344998s
Feb 23 13:00:22.315: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.954282827s
STEP: Saw pod success
Feb 23 13:00:22.316: INFO: Pod "pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:00:22.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 13:00:23.035: INFO: Waiting for pod pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008 to disappear
Feb 23 13:00:23.048: INFO: Pod pod-projected-configmaps-69e8383a-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:00:23.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jfsqf" for this suite.
Feb 23 13:00:29.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:00:29.217: INFO: namespace: e2e-tests-projected-jfsqf, resource: bindings, ignored listing per whitelist
Feb 23 13:00:29.320: INFO: namespace e2e-tests-projected-jfsqf deletion completed in 6.264282759s

• [SLOW TEST:23.170 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
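For orientation (not part of the test output): the "defaultMode set" variant above checks the file permissions of the projected keys. A minimal Go sketch of a projected ConfigMap volume with DefaultMode forced to 0400; the mode, image, and command are illustrative assumptions.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPodWithMode mounts a ConfigMap through a projected volume
// and forces the file mode of the projected keys via DefaultMode.
func projectedConfigMapPodWithMode(configMapName string) *corev1.Pod {
	defaultMode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = projectedConfigMapPodWithMode("projected-configmap-test-volume") }
```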
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:00:29.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 23 13:00:29.524: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-f7xdz,SelfLink:/api/v1/namespaces/e2e-tests-watch-f7xdz/configmaps/e2e-watch-test-resource-version,UID:77b0f25a-563c-11ea-a994-fa163e34d433,ResourceVersion:22650148,Generation:0,CreationTimestamp:2020-02-23 13:00:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 23 13:00:29.524: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-f7xdz,SelfLink:/api/v1/namespaces/e2e-tests-watch-f7xdz/configmaps/e2e-watch-test-resource-version,UID:77b0f25a-563c-11ea-a994-fa163e34d433,ResourceVersion:22650149,Generation:0,CreationTimestamp:2020-02-23 13:00:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:00:29.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-f7xdz" for this suite.
Feb 23 13:00:35.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:00:35.591: INFO: namespace: e2e-tests-watch-f7xdz, resource: bindings, ignored listing per whitelist
Feb 23 13:00:35.809: INFO: namespace e2e-tests-watch-f7xdz deletion completed in 6.279201306s

• [SLOW TEST:6.489 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
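For orientation (not part of the test output): starting a watch at a specific resourceVersion replays every change after that version, which is why the test above still receives the MODIFIED and DELETED events even though the watch is opened after the ConfigMap was already deleted. A hedged client-go sketch, again assuming a Watch signature that takes a context.Context.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFromResourceVersion replays all ConfigMap changes that happened after
// the supplied resourceVersion, e.g. the version returned by the first update.
func watchFromResourceVersion(ctx context.Context, client kubernetes.Interface, namespace, rv string) error {
	w, err := client.CoreV1().ConfigMaps(namespace).Watch(ctx, metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=from-resource-version",
		ResourceVersion: rv,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s\n", ev.Type)
	}
	return nil
}

func main() {}
```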
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:00:35.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ms9tm
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet

Feb 23 13:00:36.111: INFO: Found 0 stateful pods, waiting for 3
Feb 23 13:00:46.127: INFO: Found 1 stateful pods, waiting for 3
Feb 23 13:00:56.139: INFO: Found 2 stateful pods, waiting for 3
Feb 23 13:01:06.142: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:01:06.142: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:01:06.142: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 23 13:01:16.127: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:01:16.127: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:01:16.127: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 23 13:01:16.172: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 23 13:01:26.279: INFO: Updating stateful set ss2
Feb 23 13:01:26.294: INFO: Waiting for Pod e2e-tests-statefulset-ms9tm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 13:01:36.316: INFO: Waiting for Pod e2e-tests-statefulset-ms9tm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 23 13:01:46.659: INFO: Found 2 stateful pods, waiting for 3
Feb 23 13:01:56.703: INFO: Found 2 stateful pods, waiting for 3
Feb 23 13:02:06.800: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:02:06.800: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:02:06.800: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 23 13:02:16.677: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:02:16.677: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:02:16.677: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 23 13:02:26.671: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:02:26.671: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:02:26.671: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 23 13:02:26.711: INFO: Updating stateful set ss2
Feb 23 13:02:26.880: INFO: Waiting for Pod e2e-tests-statefulset-ms9tm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 13:02:36.980: INFO: Waiting for Pod e2e-tests-statefulset-ms9tm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 13:02:46.990: INFO: Updating stateful set ss2
Feb 23 13:02:47.008: INFO: Waiting for StatefulSet e2e-tests-statefulset-ms9tm/ss2 to complete update
Feb 23 13:02:47.008: INFO: Waiting for Pod e2e-tests-statefulset-ms9tm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 13:02:57.032: INFO: Waiting for StatefulSet e2e-tests-statefulset-ms9tm/ss2 to complete update
Feb 23 13:02:57.032: INFO: Waiting for Pod e2e-tests-statefulset-ms9tm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 23 13:03:07.053: INFO: Waiting for StatefulSet e2e-tests-statefulset-ms9tm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 23 13:03:17.033: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ms9tm
Feb 23 13:03:17.038: INFO: Scaling statefulset ss2 to 0
Feb 23 13:03:37.088: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 13:03:37.093: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:03:37.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-ms9tm" for this suite.
Feb 23 13:03:45.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:03:45.340: INFO: namespace: e2e-tests-statefulset-ms9tm, resource: bindings, ignored listing per whitelist
Feb 23 13:03:45.405: INFO: namespace e2e-tests-statefulset-ms9tm deletion completed in 8.281780749s

• [SLOW TEST:189.595 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
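For orientation (not part of the test output): the canary and phased rollout above are driven by the StatefulSet RollingUpdate partition. With a partition of N, only pods with ordinal >= N pick up the new template; lowering the partition step by step phases the rollout across the remaining pods. A small Go sketch of that knob using the apps/v1 types; the function and image bump are illustrative assumptions, not the e2e framework's helper.

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
)

// canaryThenPhased sets a RollingUpdate partition on an existing StatefulSet
// spec and bumps the container image. Only pods with ordinal >= partition are
// updated immediately (the canary); lowering the partition later continues
// the rollout in phases.
func canaryThenPhased(ss *appsv1.StatefulSet, partition int32) {
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	// Template change that only pods above the partition pick up right away.
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
}

func main() {}
```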
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:03:45.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 13:03:45.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-5c7hj" to be "success or failure"
Feb 23 13:03:45.736: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 58.001563ms
Feb 23 13:03:47.789: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110325049s
Feb 23 13:03:49.823: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145016864s
Feb 23 13:03:51.836: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157940523s
Feb 23 13:03:53.908: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229392759s
Feb 23 13:03:55.977: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298393763s
Feb 23 13:03:57.994: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.315711972s
STEP: Saw pod success
Feb 23 13:03:57.994: INFO: Pod "downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:03:58.003: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 13:03:58.386: INFO: Waiting for pod downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008 to disappear
Feb 23 13:03:58.402: INFO: Pod downwardapi-volume-ec95b06c-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:03:58.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5c7hj" for this suite.
Feb 23 13:04:04.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:04:04.630: INFO: namespace: e2e-tests-projected-5c7hj, resource: bindings, ignored listing per whitelist
Feb 23 13:04:04.657: INFO: namespace e2e-tests-projected-5c7hj deletion completed in 6.245783307s

• [SLOW TEST:19.251 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:04:04.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f815bcd4-563c-11ea-8363-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 23 13:04:04.883: INFO: Waiting up to 5m0s for pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008" in namespace "e2e-tests-secrets-4kpq2" to be "success or failure"
Feb 23 13:04:04.889: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.749481ms
Feb 23 13:04:06.911: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027158324s
Feb 23 13:04:08.944: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060408344s
Feb 23 13:04:11.036: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152369683s
Feb 23 13:04:13.076: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192394408s
Feb 23 13:04:15.104: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.220995854s
STEP: Saw pod success
Feb 23 13:04:15.105: INFO: Pod "pod-secrets-f816f333-563c-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:04:15.115: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f816f333-563c-11ea-8363-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 23 13:04:15.261: INFO: Waiting for pod pod-secrets-f816f333-563c-11ea-8363-0242ac110008 to disappear
Feb 23 13:04:15.284: INFO: Pod pod-secrets-f816f333-563c-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:04:15.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4kpq2" for this suite.
Feb 23 13:04:23.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:04:23.609: INFO: namespace: e2e-tests-secrets-4kpq2, resource: bindings, ignored listing per whitelist
Feb 23 13:04:23.648: INFO: namespace e2e-tests-secrets-4kpq2 deletion completed in 8.351493007s

• [SLOW TEST:18.992 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:04:23.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 23 13:04:24.339: INFO: Waiting up to 5m0s for pod "client-containers-039149b5-563d-11ea-8363-0242ac110008" in namespace "e2e-tests-containers-zm95c" to be "success or failure"
Feb 23 13:04:24.435: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 95.794468ms
Feb 23 13:04:26.526: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187212607s
Feb 23 13:04:28.556: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217186555s
Feb 23 13:04:30.578: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239221914s
Feb 23 13:04:32.614: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.275119434s
Feb 23 13:04:34.642: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302981589s
STEP: Saw pod success
Feb 23 13:04:34.642: INFO: Pod "client-containers-039149b5-563d-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:04:34.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-039149b5-563d-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 13:04:34.933: INFO: Waiting for pod client-containers-039149b5-563d-11ea-8363-0242ac110008 to disappear
Feb 23 13:04:34.948: INFO: Pod client-containers-039149b5-563d-11ea-8363-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:04:34.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zm95c" for this suite.
Feb 23 13:04:41.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:04:41.343: INFO: namespace: e2e-tests-containers-zm95c, resource: bindings, ignored listing per whitelist
Feb 23 13:04:41.399: INFO: namespace e2e-tests-containers-zm95c deletion completed in 6.368198396s

• [SLOW TEST:17.750 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
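For orientation (not part of the test output): overriding the image's default arguments (the docker CMD) is done by setting Args on the container while leaving Command unset, so the image ENTRYPOINT is kept. A minimal Go sketch; image and argument values are illustrative assumptions.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideArgsPod leaves Command empty (keeping the image ENTRYPOINT) but sets
// Args, which replaces the image's default CMD.
func overrideArgsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command omitted: the image entrypoint is preserved.
				Args: []string{"override", "arguments"},
			}},
		},
	}
}

func main() { _ = overrideArgsPod() }
```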
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:04:41.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0e275eed-563d-11ea-8363-0242ac110008
STEP: Creating secret with name s-test-opt-upd-0e27617d-563d-11ea-8363-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0e275eed-563d-11ea-8363-0242ac110008
STEP: Updating secret s-test-opt-upd-0e27617d-563d-11ea-8363-0242ac110008
STEP: Creating secret with name s-test-opt-create-0e276212-563d-11ea-8363-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:05:06.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cpt4d" for this suite.
Feb 23 13:05:46.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:05:47.583: INFO: namespace: e2e-tests-projected-cpt4d, resource: bindings, ignored listing per whitelist
Feb 23 13:05:47.622: INFO: namespace e2e-tests-projected-cpt4d deletion completed in 40.795684006s

• [SLOW TEST:66.221 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
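For orientation (not part of the test output): the optional-secret test above relies on Optional: true, which lets the pod start even if a referenced secret is missing; the kubelet then adds, updates, or removes the projected files as the secrets are created, updated, or deleted. A minimal Go sketch of such a pod; names, image, and command are illustrative assumptions.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalProjectedSecretPod mounts a secret marked Optional, so the pod runs
// even if the secret does not exist yet and later reflects secret changes in
// the mounted files.
func optionalProjectedSecretPod(secretName string) *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do ls /etc/projected-secret; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/projected-secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								Optional:             &optional,
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = optionalProjectedSecretPod("s-test-opt-example") }
```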
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:05:47.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 13:05:47.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-4bqpb" to be "success or failure"
Feb 23 13:05:47.951: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 63.088177ms
Feb 23 13:05:51.542: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.654166242s
Feb 23 13:05:53.561: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.67307194s
Feb 23 13:05:55.650: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.762413817s
Feb 23 13:05:57.664: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.776657365s
Feb 23 13:06:00.333: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.444928145s
Feb 23 13:06:02.343: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.45550194s
STEP: Saw pod success
Feb 23 13:06:02.343: INFO: Pod "downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:06:02.350: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 13:06:03.788: INFO: Waiting for pod downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008 to disappear
Feb 23 13:06:05.290: INFO: Pod downwardapi-volume-357d3103-563d-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:06:05.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4bqpb" for this suite.
Feb 23 13:06:11.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:06:11.596: INFO: namespace: e2e-tests-projected-4bqpb, resource: bindings, ignored listing per whitelist
Feb 23 13:06:11.692: INFO: namespace e2e-tests-projected-4bqpb deletion completed in 6.357615222s

• [SLOW TEST:24.070 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:06:11.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 23 13:06:11.966: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:06:29.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-sld84" for this suite.
Feb 23 13:06:37.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:06:37.302: INFO: namespace: e2e-tests-init-container-sld84, resource: bindings, ignored listing per whitelist
Feb 23 13:06:37.340: INFO: namespace e2e-tests-init-container-sld84 deletion completed in 8.180022932s

• [SLOW TEST:25.647 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
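The InitContainer spec above only logs "PodSpec: initContainers in spec.initContainers", so the shape of the pod is easy to miss: a restartPolicy: Never pod whose init containers must all run to completion, in order, before the regular container starts. A hedged sketch of such a pod follows; the names, busybox image and echo commands are placeholders, not what the conformance test actually runs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// A RestartNever pod with two init containers; the kubelet runs init-1, then init-2,
// and only starts "main" after both have exited successfully.
const podManifest = `
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo main done"]
`

func main() {
	cmd := exec.Command("kubectl", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(podManifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

While the init containers run, "kubectl get pod init-demo" shows statuses such as Init:0/2 and Init:1/2; once everything has exited, the pod reports Completed, which is the behaviour the spec verifies.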
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:06:37.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 23 13:06:37.554: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008" in namespace "e2e-tests-projected-g4978" to be "success or failure"
Feb 23 13:06:37.570: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.333258ms
Feb 23 13:06:39.584: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030320585s
Feb 23 13:06:41.604: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049809039s
Feb 23 13:06:43.800: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246424538s
Feb 23 13:06:46.106: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552006012s
Feb 23 13:06:48.684: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.129988728s
STEP: Saw pod success
Feb 23 13:06:48.684: INFO: Pod "downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:06:48.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008 container client-container: 
STEP: delete the pod
Feb 23 13:06:49.384: INFO: Waiting for pod downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008 to disappear
Feb 23 13:06:49.564: INFO: Pod downwardapi-volume-5314c757-563d-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:06:49.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g4978" for this suite.
Feb 23 13:06:55.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:06:56.116: INFO: namespace: e2e-tests-projected-g4978, resource: bindings, ignored listing per whitelist
Feb 23 13:06:56.158: INFO: namespace e2e-tests-projected-g4978 deletion completed in 6.582279537s

• [SLOW TEST:18.818 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
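The spec above checks that when a container declares no memory limit, the downward API reports the node's allocatable memory as the default value of limits.memory. A sketch of a pod that exposes limits.memory through a projected downwardAPI volume without setting any resources is below; the names and image are again illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// No resources.limits is set on client-container, so the downward API file should
// contain the node's allocatable memory rather than a container-level limit.
const podManifest = `
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
`

func main() {
	cmd := exec.Command("kubectl", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(podManifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}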
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:06:56.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-wlq5
STEP: Creating a pod to test atomic-volume-subpath
Feb 23 13:06:56.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wlq5" in namespace "e2e-tests-subpath-gzmck" to be "success or failure"
Feb 23 13:06:56.525: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.132571ms
Feb 23 13:06:58.988: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49430082s
Feb 23 13:07:00.999: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.504792189s
Feb 23 13:07:03.022: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527559843s
Feb 23 13:07:05.658: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.163866903s
Feb 23 13:07:07.687: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.192813509s
Feb 23 13:07:09.698: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.203884618s
Feb 23 13:07:11.772: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.277776473s
Feb 23 13:07:13.839: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.344916951s
Feb 23 13:07:15.861: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.367290173s
Feb 23 13:07:17.916: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 21.422319021s
Feb 23 13:07:19.940: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 23.446105958s
Feb 23 13:07:21.955: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 25.461171755s
Feb 23 13:07:23.974: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 27.48012958s
Feb 23 13:07:25.992: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 29.498519189s
Feb 23 13:07:28.020: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 31.526252541s
Feb 23 13:07:30.035: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 33.540726253s
Feb 23 13:07:32.103: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 35.608937884s
Feb 23 13:07:34.688: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Running", Reason="", readiness=false. Elapsed: 38.193811345s
Feb 23 13:07:36.833: INFO: Pod "pod-subpath-test-projected-wlq5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.338704232s
STEP: Saw pod success
Feb 23 13:07:36.833: INFO: Pod "pod-subpath-test-projected-wlq5" satisfied condition "success or failure"
Feb 23 13:07:36.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-wlq5 container test-container-subpath-projected-wlq5: 
STEP: delete the pod
Feb 23 13:07:37.625: INFO: Waiting for pod pod-subpath-test-projected-wlq5 to disappear
Feb 23 13:07:37.643: INFO: Pod pod-subpath-test-projected-wlq5 no longer exists
STEP: Deleting pod pod-subpath-test-projected-wlq5
Feb 23 13:07:37.643: INFO: Deleting pod "pod-subpath-test-projected-wlq5" in namespace "e2e-tests-subpath-gzmck"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:07:37.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gzmck" for this suite.
Feb 23 13:07:45.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:07:45.816: INFO: namespace: e2e-tests-subpath-gzmck, resource: bindings, ignored listing per whitelist
Feb 23 13:07:45.886: INFO: namespace e2e-tests-subpath-gzmck deletion completed in 8.227951367s

• [SLOW TEST:49.728 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
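The Subpath spec above mounts an atomically written projected volume through a subPath and checks that the file content stays consistent while the pod runs. A reduced sketch of the same idea, a projected downwardAPI file mounted via subPath, is below; the mount path, file name and image are assumptions for illustration, not the test's own values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The projected volume writes "podname" atomically; mounting it with subPath exposes
// just that file at the container path instead of the whole volume directory.
const podManifest = `
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /mounted-podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /mounted-podname
      subPath: podname
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
`

func main() {
	cmd := exec.Command("kubectl", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(podManifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}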
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:07:45.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb 23 13:07:46.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:07:46.654: INFO: stderr: ""
Feb 23 13:07:46.654: INFO: stdout: "pod/pause created\n"
Feb 23 13:07:46.654: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 23 13:07:46.654: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8qnkw" to be "running and ready"
Feb 23 13:07:46.673: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.131554ms
Feb 23 13:07:48.908: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253474713s
Feb 23 13:07:50.922: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26720994s
Feb 23 13:07:54.701: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046811365s
Feb 23 13:07:56.726: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071483513s
Feb 23 13:07:58.739: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.084846569s
Feb 23 13:07:58.739: INFO: Pod "pause" satisfied condition "running and ready"
Feb 23 13:07:58.739: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 23 13:07:58.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:07:59.075: INFO: stderr: ""
Feb 23 13:07:59.076: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 23 13:07:59.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:07:59.301: INFO: stderr: ""
Feb 23 13:07:59.301: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 23 13:07:59.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:07:59.579: INFO: stderr: ""
Feb 23 13:07:59.579: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 23 13:07:59.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:07:59.702: INFO: stderr: ""
Feb 23 13:07:59.702: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb 23 13:07:59.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:07:59.889: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 23 13:07:59.889: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 23 13:07:59.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8qnkw'
Feb 23 13:08:00.161: INFO: stderr: "No resources found.\n"
Feb 23 13:08:00.161: INFO: stdout: ""
Feb 23 13:08:00.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8qnkw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 23 13:08:00.274: INFO: stderr: ""
Feb 23 13:08:00.274: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:08:00.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8qnkw" for this suite.
Feb 23 13:08:07.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:08:07.751: INFO: namespace: e2e-tests-kubectl-8qnkw, resource: bindings, ignored listing per whitelist
Feb 23 13:08:07.770: INFO: namespace e2e-tests-kubectl-8qnkw deletion completed in 7.488236943s

• [SLOW TEST:21.884 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
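The Kubectl label spec above is the clearest how-to in this stretch of the log: add a label, read it back with -L, remove it with the "key-" form, and confirm it is gone. The sketch below replays the same four kubectl subcommands from Go via os/exec (the harness itself shells out to kubectl in exactly this way); the pause pod name and namespace are taken from the log, the error handling is added here.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same label/get/unlabel sequence the conformance test runs against the "pause" pod.
	ns := "--namespace=e2e-tests-kubectl-8qnkw" // namespace from this particular run
	steps := [][]string{
		{"label", "pods", "pause", "testing-label=testing-label-value", ns},
		{"get", "pod", "pause", "-L", "testing-label", ns},
		{"label", "pods", "pause", "testing-label-", ns}, // trailing "-" removes the label
		{"get", "pod", "pause", "-L", "testing-label", ns},
	}
	for _, args := range steps {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s", args, out)
		if err != nil {
			panic(err)
		}
	}
}

The second "get" should show an empty TESTING-LABEL column, matching the empty trailing field in the stdout the test captured above.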
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:08:07.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 23 13:08:08.068: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:08:08.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m6shz" for this suite.
Feb 23 13:08:14.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:08:14.307: INFO: namespace: e2e-tests-kubectl-m6shz, resource: bindings, ignored listing per whitelist
Feb 23 13:08:14.385: INFO: namespace e2e-tests-kubectl-m6shz deletion completed in 6.155643623s

• [SLOW TEST:6.614 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
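The Proxy server spec above starts kubectl proxy with "--port 0" (pick any free port) plus "--disable-filter" and then curls /api/ through it. The sketch below does the same from Go, except it pins the proxy to a fixed local port so the example does not have to parse the proxy's startup output; the port number and the fixed two-second wait are simplifying assumptions.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Start the proxy in the background; the test uses "-p 0 --disable-filter",
	// a fixed port keeps this sketch short.
	proxy := exec.Command("kubectl", "proxy", "--port=8001", "--disable-filter")
	if err := proxy.Start(); err != nil {
		panic(err)
	}
	defer proxy.Process.Kill()

	time.Sleep(2 * time.Second) // crude wait for the proxy to come up

	resp, err := http.Get("http://127.0.0.1:8001/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // should list the API versions served by the apiserver
}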
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:08:14.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xvp9g
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xvp9g
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-xvp9g
Feb 23 13:08:14.855: INFO: Found 0 stateful pods, waiting for 1
Feb 23 13:08:25.448: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 23 13:08:34.895: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 23 13:08:34.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 13:08:36.038: INFO: stderr: "I0223 13:08:35.241716    3667 log.go:172] (0xc000138580) (0xc0005a7220) Create stream\nI0223 13:08:35.242209    3667 log.go:172] (0xc000138580) (0xc0005a7220) Stream added, broadcasting: 1\nI0223 13:08:35.253820    3667 log.go:172] (0xc000138580) Reply frame received for 1\nI0223 13:08:35.254095    3667 log.go:172] (0xc000138580) (0xc000714000) Create stream\nI0223 13:08:35.254162    3667 log.go:172] (0xc000138580) (0xc000714000) Stream added, broadcasting: 3\nI0223 13:08:35.258955    3667 log.go:172] (0xc000138580) Reply frame received for 3\nI0223 13:08:35.259155    3667 log.go:172] (0xc000138580) (0xc0005a72c0) Create stream\nI0223 13:08:35.259181    3667 log.go:172] (0xc000138580) (0xc0005a72c0) Stream added, broadcasting: 5\nI0223 13:08:35.261242    3667 log.go:172] (0xc000138580) Reply frame received for 5\nI0223 13:08:35.546058    3667 log.go:172] (0xc000138580) Data frame received for 3\nI0223 13:08:35.546326    3667 log.go:172] (0xc000714000) (3) Data frame handling\nI0223 13:08:35.546407    3667 log.go:172] (0xc000714000) (3) Data frame sent\nI0223 13:08:36.012097    3667 log.go:172] (0xc000138580) Data frame received for 1\nI0223 13:08:36.012706    3667 log.go:172] (0xc000138580) (0xc000714000) Stream removed, broadcasting: 3\nI0223 13:08:36.013072    3667 log.go:172] (0xc0005a7220) (1) Data frame handling\nI0223 13:08:36.013139    3667 log.go:172] (0xc0005a7220) (1) Data frame sent\nI0223 13:08:36.013403    3667 log.go:172] (0xc000138580) (0xc0005a72c0) Stream removed, broadcasting: 5\nI0223 13:08:36.013651    3667 log.go:172] (0xc000138580) (0xc0005a7220) Stream removed, broadcasting: 1\nI0223 13:08:36.013848    3667 log.go:172] (0xc000138580) Go away received\nI0223 13:08:36.015021    3667 log.go:172] (0xc000138580) (0xc0005a7220) Stream removed, broadcasting: 1\nI0223 13:08:36.015057    3667 log.go:172] (0xc000138580) (0xc000714000) Stream removed, broadcasting: 3\nI0223 13:08:36.015091    3667 log.go:172] (0xc000138580) (0xc0005a72c0) Stream removed, broadcasting: 5\n"
Feb 23 13:08:36.038: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 13:08:36.038: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 13:08:36.060: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 13:08:36.060: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 13:08:36.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999381s
Feb 23 13:08:37.248: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.852313497s
Feb 23 13:08:38.266: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.842123956s
Feb 23 13:08:39.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.824357137s
Feb 23 13:08:40.303: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.806018275s
Feb 23 13:08:41.321: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.787245127s
Feb 23 13:08:42.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.768406474s
Feb 23 13:08:43.379: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.751100526s
Feb 23 13:08:44.391: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.710380366s
Feb 23 13:08:45.430: INFO: Verifying statefulset ss doesn't scale past 1 for another 698.976892ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-xvp9g
Feb 23 13:08:46.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 13:08:47.109: INFO: stderr: "I0223 13:08:46.666591    3689 log.go:172] (0xc0006d02c0) (0xc00066b360) Create stream\nI0223 13:08:46.666955    3689 log.go:172] (0xc0006d02c0) (0xc00066b360) Stream added, broadcasting: 1\nI0223 13:08:46.673054    3689 log.go:172] (0xc0006d02c0) Reply frame received for 1\nI0223 13:08:46.673100    3689 log.go:172] (0xc0006d02c0) (0xc00070e000) Create stream\nI0223 13:08:46.673114    3689 log.go:172] (0xc0006d02c0) (0xc00070e000) Stream added, broadcasting: 3\nI0223 13:08:46.674628    3689 log.go:172] (0xc0006d02c0) Reply frame received for 3\nI0223 13:08:46.674655    3689 log.go:172] (0xc0006d02c0) (0xc000546000) Create stream\nI0223 13:08:46.674664    3689 log.go:172] (0xc0006d02c0) (0xc000546000) Stream added, broadcasting: 5\nI0223 13:08:46.675598    3689 log.go:172] (0xc0006d02c0) Reply frame received for 5\nI0223 13:08:46.900011    3689 log.go:172] (0xc0006d02c0) Data frame received for 3\nI0223 13:08:46.900403    3689 log.go:172] (0xc00070e000) (3) Data frame handling\nI0223 13:08:46.900448    3689 log.go:172] (0xc00070e000) (3) Data frame sent\nI0223 13:08:47.098284    3689 log.go:172] (0xc0006d02c0) Data frame received for 1\nI0223 13:08:47.098359    3689 log.go:172] (0xc0006d02c0) (0xc00070e000) Stream removed, broadcasting: 3\nI0223 13:08:47.098430    3689 log.go:172] (0xc00066b360) (1) Data frame handling\nI0223 13:08:47.098450    3689 log.go:172] (0xc0006d02c0) (0xc000546000) Stream removed, broadcasting: 5\nI0223 13:08:47.098463    3689 log.go:172] (0xc00066b360) (1) Data frame sent\nI0223 13:08:47.098475    3689 log.go:172] (0xc0006d02c0) (0xc00066b360) Stream removed, broadcasting: 1\nI0223 13:08:47.099791    3689 log.go:172] (0xc0006d02c0) (0xc00066b360) Stream removed, broadcasting: 1\nI0223 13:08:47.099891    3689 log.go:172] (0xc0006d02c0) (0xc00070e000) Stream removed, broadcasting: 3\nI0223 13:08:47.099902    3689 log.go:172] (0xc0006d02c0) (0xc000546000) Stream removed, broadcasting: 5\n"
Feb 23 13:08:47.109: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 13:08:47.109: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 13:08:47.125: INFO: Found 1 stateful pods, waiting for 3
Feb 23 13:08:57.143: INFO: Found 2 stateful pods, waiting for 3
Feb 23 13:09:07.265: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:09:07.265: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:09:07.265: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 23 13:09:17.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:09:17.187: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 23 13:09:17.187: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 23 13:09:17.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 13:09:17.847: INFO: stderr: "I0223 13:09:17.416601    3710 log.go:172] (0xc00070a370) (0xc00072c640) Create stream\nI0223 13:09:17.416931    3710 log.go:172] (0xc00070a370) (0xc00072c640) Stream added, broadcasting: 1\nI0223 13:09:17.422075    3710 log.go:172] (0xc00070a370) Reply frame received for 1\nI0223 13:09:17.422129    3710 log.go:172] (0xc00070a370) (0xc000780be0) Create stream\nI0223 13:09:17.422144    3710 log.go:172] (0xc00070a370) (0xc000780be0) Stream added, broadcasting: 3\nI0223 13:09:17.423982    3710 log.go:172] (0xc00070a370) Reply frame received for 3\nI0223 13:09:17.424032    3710 log.go:172] (0xc00070a370) (0xc000590000) Create stream\nI0223 13:09:17.424040    3710 log.go:172] (0xc00070a370) (0xc000590000) Stream added, broadcasting: 5\nI0223 13:09:17.425244    3710 log.go:172] (0xc00070a370) Reply frame received for 5\nI0223 13:09:17.587933    3710 log.go:172] (0xc00070a370) Data frame received for 3\nI0223 13:09:17.588121    3710 log.go:172] (0xc000780be0) (3) Data frame handling\nI0223 13:09:17.588192    3710 log.go:172] (0xc000780be0) (3) Data frame sent\nI0223 13:09:17.831683    3710 log.go:172] (0xc00070a370) Data frame received for 1\nI0223 13:09:17.831944    3710 log.go:172] (0xc00070a370) (0xc000590000) Stream removed, broadcasting: 5\nI0223 13:09:17.832060    3710 log.go:172] (0xc00072c640) (1) Data frame handling\nI0223 13:09:17.832094    3710 log.go:172] (0xc00072c640) (1) Data frame sent\nI0223 13:09:17.832171    3710 log.go:172] (0xc00070a370) (0xc000780be0) Stream removed, broadcasting: 3\nI0223 13:09:17.832255    3710 log.go:172] (0xc00070a370) (0xc00072c640) Stream removed, broadcasting: 1\nI0223 13:09:17.832273    3710 log.go:172] (0xc00070a370) Go away received\nI0223 13:09:17.833468    3710 log.go:172] (0xc00070a370) (0xc00072c640) Stream removed, broadcasting: 1\nI0223 13:09:17.833482    3710 log.go:172] (0xc00070a370) (0xc000780be0) Stream removed, broadcasting: 3\nI0223 13:09:17.833495    3710 log.go:172] (0xc00070a370) (0xc000590000) Stream removed, broadcasting: 5\n"
Feb 23 13:09:17.847: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 13:09:17.848: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 13:09:17.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 13:09:18.414: INFO: stderr: "I0223 13:09:18.062142    3733 log.go:172] (0xc0007f22c0) (0xc000511360) Create stream\nI0223 13:09:18.062370    3733 log.go:172] (0xc0007f22c0) (0xc000511360) Stream added, broadcasting: 1\nI0223 13:09:18.067166    3733 log.go:172] (0xc0007f22c0) Reply frame received for 1\nI0223 13:09:18.067210    3733 log.go:172] (0xc0007f22c0) (0xc000440000) Create stream\nI0223 13:09:18.067225    3733 log.go:172] (0xc0007f22c0) (0xc000440000) Stream added, broadcasting: 3\nI0223 13:09:18.068216    3733 log.go:172] (0xc0007f22c0) Reply frame received for 3\nI0223 13:09:18.068238    3733 log.go:172] (0xc0007f22c0) (0xc00050e000) Create stream\nI0223 13:09:18.068246    3733 log.go:172] (0xc0007f22c0) (0xc00050e000) Stream added, broadcasting: 5\nI0223 13:09:18.068977    3733 log.go:172] (0xc0007f22c0) Reply frame received for 5\nI0223 13:09:18.205228    3733 log.go:172] (0xc0007f22c0) Data frame received for 3\nI0223 13:09:18.205302    3733 log.go:172] (0xc000440000) (3) Data frame handling\nI0223 13:09:18.205327    3733 log.go:172] (0xc000440000) (3) Data frame sent\nI0223 13:09:18.399891    3733 log.go:172] (0xc0007f22c0) (0xc000440000) Stream removed, broadcasting: 3\nI0223 13:09:18.400429    3733 log.go:172] (0xc0007f22c0) Data frame received for 1\nI0223 13:09:18.400643    3733 log.go:172] (0xc0007f22c0) (0xc00050e000) Stream removed, broadcasting: 5\nI0223 13:09:18.400798    3733 log.go:172] (0xc000511360) (1) Data frame handling\nI0223 13:09:18.400844    3733 log.go:172] (0xc000511360) (1) Data frame sent\nI0223 13:09:18.400857    3733 log.go:172] (0xc0007f22c0) (0xc000511360) Stream removed, broadcasting: 1\nI0223 13:09:18.400877    3733 log.go:172] (0xc0007f22c0) Go away received\nI0223 13:09:18.402400    3733 log.go:172] (0xc0007f22c0) (0xc000511360) Stream removed, broadcasting: 1\nI0223 13:09:18.402459    3733 log.go:172] (0xc0007f22c0) (0xc000440000) Stream removed, broadcasting: 3\nI0223 13:09:18.402525    3733 log.go:172] (0xc0007f22c0) (0xc00050e000) Stream removed, broadcasting: 5\n"
Feb 23 13:09:18.414: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 13:09:18.414: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 13:09:18.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 23 13:09:19.340: INFO: stderr: "I0223 13:09:18.919624    3755 log.go:172] (0xc000778160) (0xc0006ec5a0) Create stream\nI0223 13:09:18.920170    3755 log.go:172] (0xc000778160) (0xc0006ec5a0) Stream added, broadcasting: 1\nI0223 13:09:18.936531    3755 log.go:172] (0xc000778160) Reply frame received for 1\nI0223 13:09:18.936627    3755 log.go:172] (0xc000778160) (0xc0006ec640) Create stream\nI0223 13:09:18.936638    3755 log.go:172] (0xc000778160) (0xc0006ec640) Stream added, broadcasting: 3\nI0223 13:09:18.939254    3755 log.go:172] (0xc000778160) Reply frame received for 3\nI0223 13:09:18.939349    3755 log.go:172] (0xc000778160) (0xc0007dad20) Create stream\nI0223 13:09:18.939364    3755 log.go:172] (0xc000778160) (0xc0007dad20) Stream added, broadcasting: 5\nI0223 13:09:18.940852    3755 log.go:172] (0xc000778160) Reply frame received for 5\nI0223 13:09:19.198232    3755 log.go:172] (0xc000778160) Data frame received for 3\nI0223 13:09:19.198294    3755 log.go:172] (0xc0006ec640) (3) Data frame handling\nI0223 13:09:19.198320    3755 log.go:172] (0xc0006ec640) (3) Data frame sent\nI0223 13:09:19.325246    3755 log.go:172] (0xc000778160) Data frame received for 1\nI0223 13:09:19.325367    3755 log.go:172] (0xc0006ec5a0) (1) Data frame handling\nI0223 13:09:19.325394    3755 log.go:172] (0xc0006ec5a0) (1) Data frame sent\nI0223 13:09:19.328016    3755 log.go:172] (0xc000778160) (0xc0006ec5a0) Stream removed, broadcasting: 1\nI0223 13:09:19.328322    3755 log.go:172] (0xc000778160) (0xc0007dad20) Stream removed, broadcasting: 5\nI0223 13:09:19.328408    3755 log.go:172] (0xc000778160) (0xc0006ec640) Stream removed, broadcasting: 3\nI0223 13:09:19.328970    3755 log.go:172] (0xc000778160) (0xc0006ec5a0) Stream removed, broadcasting: 1\nI0223 13:09:19.328989    3755 log.go:172] (0xc000778160) (0xc0006ec640) Stream removed, broadcasting: 3\nI0223 13:09:19.328994    3755 log.go:172] (0xc000778160) (0xc0007dad20) Stream removed, broadcasting: 5\nI0223 13:09:19.329744    3755 log.go:172] (0xc000778160) Go away received\n"
Feb 23 13:09:19.340: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 23 13:09:19.340: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 23 13:09:19.340: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 13:09:19.352: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 23 13:09:29.394: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 13:09:29.394: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 13:09:29.394: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 23 13:09:29.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999439s
Feb 23 13:09:30.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981668775s
Feb 23 13:09:31.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.964172602s
Feb 23 13:09:32.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.951691968s
Feb 23 13:09:33.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.916759365s
Feb 23 13:09:34.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.895814744s
Feb 23 13:09:35.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.876159436s
Feb 23 13:09:36.607: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.860589278s
Feb 23 13:09:37.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.813381215s
Feb 23 13:09:40.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 771.038876ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-xvp9g
Feb 23 13:09:41.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 13:09:42.768: INFO: stderr: "I0223 13:09:42.187808    3777 log.go:172] (0xc000154630) (0xc000585680) Create stream\nI0223 13:09:42.188146    3777 log.go:172] (0xc000154630) (0xc000585680) Stream added, broadcasting: 1\nI0223 13:09:42.196489    3777 log.go:172] (0xc000154630) Reply frame received for 1\nI0223 13:09:42.196541    3777 log.go:172] (0xc000154630) (0xc000585720) Create stream\nI0223 13:09:42.196553    3777 log.go:172] (0xc000154630) (0xc000585720) Stream added, broadcasting: 3\nI0223 13:09:42.197961    3777 log.go:172] (0xc000154630) Reply frame received for 3\nI0223 13:09:42.197986    3777 log.go:172] (0xc000154630) (0xc0003d9220) Create stream\nI0223 13:09:42.197993    3777 log.go:172] (0xc000154630) (0xc0003d9220) Stream added, broadcasting: 5\nI0223 13:09:42.198975    3777 log.go:172] (0xc000154630) Reply frame received for 5\nI0223 13:09:42.420537    3777 log.go:172] (0xc000154630) Data frame received for 3\nI0223 13:09:42.420620    3777 log.go:172] (0xc000585720) (3) Data frame handling\nI0223 13:09:42.420640    3777 log.go:172] (0xc000585720) (3) Data frame sent\nI0223 13:09:42.748187    3777 log.go:172] (0xc000154630) Data frame received for 1\nI0223 13:09:42.748525    3777 log.go:172] (0xc000154630) (0xc000585720) Stream removed, broadcasting: 3\nI0223 13:09:42.748597    3777 log.go:172] (0xc000585680) (1) Data frame handling\nI0223 13:09:42.748645    3777 log.go:172] (0xc000585680) (1) Data frame sent\nI0223 13:09:42.748694    3777 log.go:172] (0xc000154630) (0xc0003d9220) Stream removed, broadcasting: 5\nI0223 13:09:42.748754    3777 log.go:172] (0xc000154630) (0xc000585680) Stream removed, broadcasting: 1\nI0223 13:09:42.748770    3777 log.go:172] (0xc000154630) Go away received\nI0223 13:09:42.750880    3777 log.go:172] (0xc000154630) (0xc000585680) Stream removed, broadcasting: 1\nI0223 13:09:42.751180    3777 log.go:172] (0xc000154630) (0xc000585720) Stream removed, broadcasting: 3\nI0223 13:09:42.751214    3777 log.go:172] (0xc000154630) (0xc0003d9220) Stream removed, broadcasting: 5\n"
Feb 23 13:09:42.769: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 13:09:42.769: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 13:09:42.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 13:09:43.496: INFO: stderr: "I0223 13:09:43.256958    3798 log.go:172] (0xc00013a0b0) (0xc0006ee5a0) Create stream\nI0223 13:09:43.257177    3798 log.go:172] (0xc00013a0b0) (0xc0006ee5a0) Stream added, broadcasting: 1\nI0223 13:09:43.263187    3798 log.go:172] (0xc00013a0b0) Reply frame received for 1\nI0223 13:09:43.263240    3798 log.go:172] (0xc00013a0b0) (0xc0006ee640) Create stream\nI0223 13:09:43.263261    3798 log.go:172] (0xc00013a0b0) (0xc0006ee640) Stream added, broadcasting: 3\nI0223 13:09:43.264078    3798 log.go:172] (0xc00013a0b0) Reply frame received for 3\nI0223 13:09:43.264104    3798 log.go:172] (0xc00013a0b0) (0xc00056edc0) Create stream\nI0223 13:09:43.264115    3798 log.go:172] (0xc00013a0b0) (0xc00056edc0) Stream added, broadcasting: 5\nI0223 13:09:43.265027    3798 log.go:172] (0xc00013a0b0) Reply frame received for 5\nI0223 13:09:43.361965    3798 log.go:172] (0xc00013a0b0) Data frame received for 3\nI0223 13:09:43.362052    3798 log.go:172] (0xc0006ee640) (3) Data frame handling\nI0223 13:09:43.362070    3798 log.go:172] (0xc0006ee640) (3) Data frame sent\nI0223 13:09:43.488886    3798 log.go:172] (0xc00013a0b0) Data frame received for 1\nI0223 13:09:43.488964    3798 log.go:172] (0xc0006ee5a0) (1) Data frame handling\nI0223 13:09:43.488985    3798 log.go:172] (0xc0006ee5a0) (1) Data frame sent\nI0223 13:09:43.489000    3798 log.go:172] (0xc00013a0b0) (0xc0006ee5a0) Stream removed, broadcasting: 1\nI0223 13:09:43.489600    3798 log.go:172] (0xc00013a0b0) (0xc0006ee640) Stream removed, broadcasting: 3\nI0223 13:09:43.489667    3798 log.go:172] (0xc00013a0b0) (0xc00056edc0) Stream removed, broadcasting: 5\nI0223 13:09:43.489728    3798 log.go:172] (0xc00013a0b0) (0xc0006ee5a0) Stream removed, broadcasting: 1\nI0223 13:09:43.489747    3798 log.go:172] (0xc00013a0b0) (0xc0006ee640) Stream removed, broadcasting: 3\nI0223 13:09:43.489757    3798 log.go:172] (0xc00013a0b0) (0xc00056edc0) Stream removed, broadcasting: 5\nI0223 13:09:43.489832    3798 log.go:172] (0xc00013a0b0) Go away received\n"
Feb 23 13:09:43.496: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 13:09:43.496: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 13:09:43.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvp9g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 23 13:09:44.060: INFO: stderr: "I0223 13:09:43.669804    3819 log.go:172] (0xc0006fe370) (0xc000722640) Create stream\nI0223 13:09:43.670041    3819 log.go:172] (0xc0006fe370) (0xc000722640) Stream added, broadcasting: 1\nI0223 13:09:43.676611    3819 log.go:172] (0xc0006fe370) Reply frame received for 1\nI0223 13:09:43.676678    3819 log.go:172] (0xc0006fe370) (0xc000662c80) Create stream\nI0223 13:09:43.676694    3819 log.go:172] (0xc0006fe370) (0xc000662c80) Stream added, broadcasting: 3\nI0223 13:09:43.678752    3819 log.go:172] (0xc0006fe370) Reply frame received for 3\nI0223 13:09:43.678830    3819 log.go:172] (0xc0006fe370) (0xc000662dc0) Create stream\nI0223 13:09:43.678838    3819 log.go:172] (0xc0006fe370) (0xc000662dc0) Stream added, broadcasting: 5\nI0223 13:09:43.680051    3819 log.go:172] (0xc0006fe370) Reply frame received for 5\nI0223 13:09:43.829775    3819 log.go:172] (0xc0006fe370) Data frame received for 3\nI0223 13:09:43.829925    3819 log.go:172] (0xc000662c80) (3) Data frame handling\nI0223 13:09:43.829953    3819 log.go:172] (0xc000662c80) (3) Data frame sent\nI0223 13:09:44.051367    3819 log.go:172] (0xc0006fe370) Data frame received for 1\nI0223 13:09:44.051563    3819 log.go:172] (0xc0006fe370) (0xc000662dc0) Stream removed, broadcasting: 5\nI0223 13:09:44.051672    3819 log.go:172] (0xc000722640) (1) Data frame handling\nI0223 13:09:44.051693    3819 log.go:172] (0xc0006fe370) (0xc000662c80) Stream removed, broadcasting: 3\nI0223 13:09:44.051743    3819 log.go:172] (0xc000722640) (1) Data frame sent\nI0223 13:09:44.051761    3819 log.go:172] (0xc0006fe370) (0xc000722640) Stream removed, broadcasting: 1\nI0223 13:09:44.051782    3819 log.go:172] (0xc0006fe370) Go away received\nI0223 13:09:44.052497    3819 log.go:172] (0xc0006fe370) (0xc000722640) Stream removed, broadcasting: 1\nI0223 13:09:44.052506    3819 log.go:172] (0xc0006fe370) (0xc000662c80) Stream removed, broadcasting: 3\nI0223 13:09:44.052510    3819 log.go:172] (0xc0006fe370) (0xc000662dc0) Stream removed, broadcasting: 5\n"
Feb 23 13:09:44.060: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 23 13:09:44.060: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 23 13:09:44.060: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 23 13:10:24.297: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xvp9g
Feb 23 13:10:24.305: INFO: Scaling statefulset ss to 0
Feb 23 13:10:24.319: INFO: Waiting for statefulset status.replicas updated to 0
Feb 23 13:10:24.325: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:10:24.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xvp9g" for this suite.
Feb 23 13:10:32.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:10:32.706: INFO: namespace: e2e-tests-statefulset-xvp9g, resource: bindings, ignored listing per whitelist
Feb 23 13:10:32.883: INFO: namespace e2e-tests-statefulset-xvp9g deletion completed in 8.513795328s

• [SLOW TEST:138.497 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
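The StatefulSet spec above is worth unpacking: with the default OrderedReady pod management policy, scale-up creates ss-0, ss-1, ss-2 one at a time and halts while any pod is unready (the test forces unreadiness by mv-ing index.html away from nginx, as the exec output shows), and scale-down removes pods in reverse ordinal order under the same rule. A hedged sketch of a comparable StatefulSet plus the scale command follows; the headless service, labels, probe and image are illustrative choices, not copied from the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Headless service plus a StatefulSet whose readiness probe targets the file the
// test moves around; while a pod is unready, ordered scaling stalls at that ordinal.
const manifest = `
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None
  selector:
    app: ss
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
`

func main() {
	create := exec.Command("kubectl", "create", "-f", "-")
	create.Stdin = strings.NewReader(manifest)
	if out, err := create.CombinedOutput(); err != nil {
		panic(string(out))
	}
	// Scale up; with OrderedReady this proceeds pod-by-pod only while each pod is Ready.
	out, err := exec.Command("kubectl", "scale", "statefulset", "ss", "--replicas=3").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}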
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:10:32.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 13:10:33.285: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 23 13:10:33.351: INFO: Number of nodes with available pods: 0
Feb 23 13:10:33.351: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 23 13:10:33.587: INFO: Number of nodes with available pods: 0
Feb 23 13:10:33.587: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:34.614: INFO: Number of nodes with available pods: 0
Feb 23 13:10:34.614: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:36.376: INFO: Number of nodes with available pods: 0
Feb 23 13:10:36.376: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:37.033: INFO: Number of nodes with available pods: 0
Feb 23 13:10:37.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:38.096: INFO: Number of nodes with available pods: 0
Feb 23 13:10:38.097: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:38.624: INFO: Number of nodes with available pods: 0
Feb 23 13:10:38.624: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:39.635: INFO: Number of nodes with available pods: 0
Feb 23 13:10:39.635: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:40.620: INFO: Number of nodes with available pods: 0
Feb 23 13:10:40.620: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:42.938: INFO: Number of nodes with available pods: 0
Feb 23 13:10:42.938: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:43.971: INFO: Number of nodes with available pods: 0
Feb 23 13:10:43.971: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:44.650: INFO: Number of nodes with available pods: 0
Feb 23 13:10:44.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:45.605: INFO: Number of nodes with available pods: 0
Feb 23 13:10:45.605: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:46.625: INFO: Number of nodes with available pods: 0
Feb 23 13:10:46.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:47.596: INFO: Number of nodes with available pods: 1
Feb 23 13:10:47.596: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 23 13:10:47.660: INFO: Number of nodes with available pods: 1
Feb 23 13:10:47.660: INFO: Number of running nodes: 0, number of available pods: 1
Feb 23 13:10:48.698: INFO: Number of nodes with available pods: 0
Feb 23 13:10:48.698: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 23 13:10:48.749: INFO: Number of nodes with available pods: 0
Feb 23 13:10:48.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:50.218: INFO: Number of nodes with available pods: 0
Feb 23 13:10:50.218: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:50.765: INFO: Number of nodes with available pods: 0
Feb 23 13:10:50.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:51.761: INFO: Number of nodes with available pods: 0
Feb 23 13:10:51.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:52.774: INFO: Number of nodes with available pods: 0
Feb 23 13:10:52.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:53.910: INFO: Number of nodes with available pods: 0
Feb 23 13:10:53.911: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:54.770: INFO: Number of nodes with available pods: 0
Feb 23 13:10:54.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:55.773: INFO: Number of nodes with available pods: 0
Feb 23 13:10:55.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:57.733: INFO: Number of nodes with available pods: 0
Feb 23 13:10:57.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:58.206: INFO: Number of nodes with available pods: 0
Feb 23 13:10:58.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:58.830: INFO: Number of nodes with available pods: 0
Feb 23 13:10:58.830: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:10:59.781: INFO: Number of nodes with available pods: 0
Feb 23 13:10:59.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:00.832: INFO: Number of nodes with available pods: 0
Feb 23 13:11:00.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:01.767: INFO: Number of nodes with available pods: 0
Feb 23 13:11:01.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:05.513: INFO: Number of nodes with available pods: 0
Feb 23 13:11:05.513: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:05.965: INFO: Number of nodes with available pods: 0
Feb 23 13:11:05.965: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:07.016: INFO: Number of nodes with available pods: 0
Feb 23 13:11:07.016: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:07.766: INFO: Number of nodes with available pods: 0
Feb 23 13:11:07.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:08.777: INFO: Number of nodes with available pods: 0
Feb 23 13:11:08.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 23 13:11:09.760: INFO: Number of nodes with available pods: 1
Feb 23 13:11:09.760: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-k77b5, will wait for the garbage collector to delete the pods
Feb 23 13:11:09.899: INFO: Deleting DaemonSet.extensions daemon-set took: 71.080724ms
Feb 23 13:11:09.999: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.485477ms
Feb 23 13:11:22.644: INFO: Number of nodes with available pods: 0
Feb 23 13:11:22.644: INFO: Number of running nodes: 0, number of available pods: 0
Feb 23 13:11:22.658: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-k77b5/daemonsets","resourceVersion":"22651669"},"items":null}

Feb 23 13:11:22.665: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-k77b5/pods","resourceVersion":"22651669"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:11:22.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-k77b5" for this suite.
Feb 23 13:11:30.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:11:31.030: INFO: namespace: e2e-tests-daemonsets-k77b5, resource: bindings, ignored listing per whitelist
Feb 23 13:11:31.108: INFO: namespace e2e-tests-daemonsets-k77b5 deletion completed in 8.323209969s

• [SLOW TEST:58.224 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
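The Daemon set spec above drives scheduling purely through node labels: a DaemonSet with nodeSelector color=blue runs zero pods until a node is labelled blue, and the pod is removed again once the label is flipped to green. The sketch below shows an equivalent DaemonSet and the node-label commands; the blue/green values follow the STEP lines, the node name is the one from this run, and the nginx image and app label are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const daemonSet = `
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: nginx
`

func main() {
	node := "hunter-server-hu5at5svl7ps" // the single node in this test run

	create := exec.Command("kubectl", "create", "-f", "-")
	create.Stdin = strings.NewReader(daemonSet)
	if out, err := create.CombinedOutput(); err != nil {
		panic(string(out))
	}
	// Label the node blue so the daemon pod schedules; flipping it to green later
	// unschedules the pod again (watch "kubectl get pods -o wide" between the steps).
	for _, label := range []string{"color=blue", "color=green"} {
		out, err := exec.Command("kubectl", "label", "node", node, label, "--overwrite").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}
}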
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:11:31.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-9z8lt/configmap-test-02456364-563e-11ea-8363-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 23 13:11:31.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008" in namespace "e2e-tests-configmap-9z8lt" to be "success or failure"
Feb 23 13:11:31.576: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.039484ms
Feb 23 13:11:33.815: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261079374s
Feb 23 13:11:35.842: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287692609s
Feb 23 13:11:38.709: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.154960609s
Feb 23 13:11:40.971: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.417281365s
Feb 23 13:11:43.045: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.490891851s
Feb 23 13:11:45.357: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.802732877s
Feb 23 13:11:47.646: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.091630874s
STEP: Saw pod success
Feb 23 13:11:47.646: INFO: Pod "pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:11:47.655: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008 container env-test: 
STEP: delete the pod
Feb 23 13:11:48.050: INFO: Waiting for pod pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008 to disappear
Feb 23 13:11:48.075: INFO: Pod pod-configmaps-02466fb4-563e-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:11:48.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9z8lt" for this suite.
Feb 23 13:11:56.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:11:57.320: INFO: namespace: e2e-tests-configmap-9z8lt, resource: bindings, ignored listing per whitelist
Feb 23 13:11:57.332: INFO: namespace e2e-tests-configmap-9z8lt deletion completed in 9.199806855s

• [SLOW TEST:26.224 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
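
The ConfigMap spec above creates a ConfigMap, starts a pod whose container imports one key as an environment variable, waits for the pod to succeed, and reads its logs. A rough Python equivalent; the names configmap-test, CONFIG_DATA_1 and the busybox image are illustrative, the conformance test uses its own test image and generated names.

from kubernetes import client, config

config.load_kube_config()
core, ns = client.CoreV1Api(), "default"

core.create_namespaced_config_map(ns, {
    "apiVersion": "v1", "kind": "ConfigMap",
    "metadata": {"name": "configmap-test"},
    "data": {"data-1": "value-1"},
})

core.create_namespaced_pod(ns, {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-configmaps-env"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "env-test",
            "image": "busybox",
            # Print the injected variable so it shows up in the pod logs.
            "command": ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"],
            "env": [{
                "name": "CONFIG_DATA_1",
                "valueFrom": {"configMapKeyRef": {"name": "configmap-test",
                                                  "key": "data-1"}},
            }],
        }],
    },
})
# Once the pod reports Succeeded, its logs should contain CONFIG_DATA_1=value-1:
# print(core.read_namespaced_pod_log("pod-configmaps-env", ns))
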
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:11:57.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:12:14.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qbwgt" for this suite.
Feb 23 13:12:40.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:12:40.951: INFO: namespace: e2e-tests-replication-controller-qbwgt, resource: bindings, ignored listing per whitelist
Feb 23 13:12:41.017: INFO: namespace e2e-tests-replication-controller-qbwgt deletion completed in 26.255481155s

• [SLOW TEST:43.684 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
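
The ReplicationController spec is about adoption: a bare pod carrying the right label already exists, and a controller created afterwards with a matching selector takes ownership of it instead of starting a replacement. A sketch of that sequence; pod-adoption mirrors the step names in the log, while the controller name and image are illustrative.

from kubernetes import client, config

config.load_kube_config()
core, ns = client.CoreV1Api(), "default"

# 1. A plain pod with a 'name' label and no owner.
core.create_namespaced_pod(ns, {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-adoption", "labels": {"name": "pod-adoption"}},
    "spec": {"containers": [{"name": "app", "image": "nginx"}]},
})

# 2. A replication controller whose selector matches that label.
core.create_namespaced_replication_controller(ns, {
    "apiVersion": "v1", "kind": "ReplicationController",
    "metadata": {"name": "pod-adoption-rc"},
    "spec": {
        "replicas": 1,
        "selector": {"name": "pod-adoption"},
        "template": {
            "metadata": {"labels": {"name": "pod-adoption"}},
            "spec": {"containers": [{"name": "app", "image": "nginx"}]},
        },
    },
})

# 3. The controller adopts the orphan: its name appears in ownerReferences.
owners = core.read_namespaced_pod("pod-adoption", ns).metadata.owner_references
print(owners)   # may take a moment to be populated by the controller manager
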
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:12:41.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2be37fbb-563e-11ea-8363-0242ac110008
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2be37fbb-563e-11ea-8363-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:12:57.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8hjmx" for this suite.
Feb 23 13:13:21.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:13:21.571: INFO: namespace: e2e-tests-projected-8hjmx, resource: bindings, ignored listing per whitelist
Feb 23 13:13:21.807: INFO: namespace e2e-tests-projected-8hjmx deletion completed in 24.317158902s

• [SLOW TEST:40.790 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
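
The projected-configMap spec mounts a ConfigMap through a projected volume, edits the ConfigMap, and then waits until the kubelet refreshes the mounted file. In sketch form; object names, mount path and image are illustrative, and propagation normally takes up to the kubelet sync period plus its cache TTL.

from kubernetes import client, config

config.load_kube_config()
core, ns = client.CoreV1Api(), "default"

core.create_namespaced_config_map(ns, {
    "apiVersion": "v1", "kind": "ConfigMap",
    "metadata": {"name": "projected-configmap-test"},
    "data": {"data-1": "value-1"},
})

core.create_namespaced_pod(ns, {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps"},
    "spec": {
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",
            "command": ["sh", "-c", "sleep 3600"],
            "volumeMounts": [{"name": "projected-cm",
                              "mountPath": "/etc/projected-configmap-volume"}],
        }],
        "volumes": [{
            "name": "projected-cm",
            "projected": {"sources": [{"configMap": {"name": "projected-configmap-test"}}]},
        }],
    },
})

# Updating the ConfigMap changes /etc/projected-configmap-volume/data-1 inside
# the running container once the kubelet re-syncs the projected volume.
core.patch_namespaced_config_map("projected-configmap-test", ns,
                                 {"data": {"data-1": "value-2"}})
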
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:13:21.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 23 13:13:22.157: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 23 13:13:22.222: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hxz5f/daemonsets","resourceVersion":"22651910"},"items":null}

Feb 23 13:13:22.256: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hxz5f/pods","resourceVersion":"22651910"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:13:22.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hxz5f" for this suite.
Feb 23 13:13:28.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:13:28.556: INFO: namespace: e2e-tests-daemonsets-hxz5f, resource: bindings, ignored listing per whitelist
Feb 23 13:13:28.778: INFO: namespace e2e-tests-daemonsets-hxz5f deletion completed in 6.487485731s

S [SKIPPING] [6.971 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 23 13:13:22.157: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:13:28.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb 23 13:13:28.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 23 13:13:29.171: INFO: stderr: ""
Feb 23 13:13:29.171: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:13:29.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n6ddl" for this suite.
Feb 23 13:13:35.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:13:35.372: INFO: namespace: e2e-tests-kubectl-n6ddl, resource: bindings, ignored listing per whitelist
Feb 23 13:13:35.379: INFO: namespace e2e-tests-kubectl-n6ddl deletion completed in 6.181487244s

• [SLOW TEST:6.600 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
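
The kubectl api-versions check can also be made against the discovery endpoints directly; this is roughly what the CLI output above is built from. A small sketch with the Python client, assuming the same kubeconfig.

from kubernetes import client, config

config.load_kube_config()

# GET /api -> the legacy core group, which must advertise "v1".
core_versions = client.CoreApi().get_api_versions().versions
assert "v1" in core_versions

# GET /apis -> every named group/version, e.g. apps/v1, batch/v1, ...
group_versions = sorted(
    gv.group_version
    for group in client.ApisApi().get_api_versions().groups
    for gv in group.versions)

print("\n".join(core_versions + group_versions))
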
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:13:35.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb 23 13:13:35.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 23 13:13:37.565: INFO: stderr: ""
Feb 23 13:13:37.565: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:13:37.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-btdrv" for this suite.
Feb 23 13:13:43.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:13:43.757: INFO: namespace: e2e-tests-kubectl-btdrv, resource: bindings, ignored listing per whitelist
Feb 23 13:13:44.058: INFO: namespace e2e-tests-kubectl-btdrv deletion completed in 6.475765335s

• [SLOW TEST:8.679 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:13:44.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 23 13:13:55.238: INFO: Successfully updated pod "annotationupdate519165ea-563e-11ea-8363-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:13:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b7plt" for this suite.
Feb 23 13:14:19.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:14:19.829: INFO: namespace: e2e-tests-downward-api-b7plt, resource: bindings, ignored listing per whitelist
Feb 23 13:14:19.921: INFO: namespace e2e-tests-downward-api-b7plt deletion completed in 22.21130308s

• [SLOW TEST:35.862 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
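
The downward-API spec runs a pod whose annotations are projected into a file, then updates the annotations and waits for the projected file to change. A sketch of the same shape; the annotation keys and values, pod name and busybox image are illustrative.

from kubernetes import client, config

config.load_kube_config()
core, ns = client.CoreV1Api(), "default"

core.create_namespaced_pod(ns, {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "annotationupdate-example",
                 "annotations": {"build": "one"}},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "sleep 3600"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {"items": [{
                "path": "annotations",
                "fieldRef": {"fieldPath": "metadata.annotations"},
            }]},
        }],
    },
})

# Patching the annotation corresponds to the "Successfully updated pod" step;
# the kubelet rewrites /etc/podinfo/annotations shortly afterwards.
core.patch_namespaced_pod("annotationupdate-example", ns,
                          {"metadata": {"annotations": {"build": "two"}}})
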
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:14:19.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 23 13:14:20.077: INFO: Waiting up to 5m0s for pod "pod-66c9a757-563e-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-jzgtm" to be "success or failure"
Feb 23 13:14:20.086: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848091ms
Feb 23 13:14:22.110: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033301702s
Feb 23 13:14:24.191: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113960905s
Feb 23 13:14:26.259: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181595974s
Feb 23 13:14:28.706: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629050664s
Feb 23 13:14:30.809: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.73159963s
STEP: Saw pod success
Feb 23 13:14:30.809: INFO: Pod "pod-66c9a757-563e-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:14:30.857: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-66c9a757-563e-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 13:14:31.013: INFO: Waiting for pod pod-66c9a757-563e-11ea-8363-0242ac110008 to disappear
Feb 23 13:14:31.028: INFO: Pod pod-66c9a757-563e-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:14:31.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jzgtm" for this suite.
Feb 23 13:14:37.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:14:37.391: INFO: namespace: e2e-tests-emptydir-jzgtm, resource: bindings, ignored listing per whitelist
Feb 23 13:14:37.408: INFO: namespace e2e-tests-emptydir-jzgtm deletion completed in 6.369784345s

• [SLOW TEST:17.486 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 23 13:14:37.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 23 13:14:37.628: INFO: Waiting up to 5m0s for pod "pod-713e6291-563e-11ea-8363-0242ac110008" in namespace "e2e-tests-emptydir-hgkmz" to be "success or failure"
Feb 23 13:14:37.644: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.760924ms
Feb 23 13:14:39.659: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031370564s
Feb 23 13:14:41.671: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042889441s
Feb 23 13:14:43.766: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138255793s
Feb 23 13:14:45.778: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149585138s
Feb 23 13:14:47.789: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.160892621s
STEP: Saw pod success
Feb 23 13:14:47.789: INFO: Pod "pod-713e6291-563e-11ea-8363-0242ac110008" satisfied condition "success or failure"
Feb 23 13:14:47.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-713e6291-563e-11ea-8363-0242ac110008 container test-container: 
STEP: delete the pod
Feb 23 13:14:49.274: INFO: Waiting for pod pod-713e6291-563e-11ea-8363-0242ac110008 to disappear
Feb 23 13:14:49.292: INFO: Pod pod-713e6291-563e-11ea-8363-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 23 13:14:49.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hgkmz" for this suite.
Feb 23 13:14:55.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 23 13:14:55.452: INFO: namespace: e2e-tests-emptydir-hgkmz, resource: bindings, ignored listing per whitelist
Feb 23 13:14:55.463: INFO: namespace e2e-tests-emptydir-hgkmz deletion completed in 6.162443252s

• [SLOW TEST:18.055 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
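
The two EmptyDir specs above differ only in the volume medium and the file mode they verify: tmpfs (medium Memory) with 0777 versus the node's default medium with 0666, both as a non-root user. A parametric sketch covering both shapes; the UID, busybox image and paths are illustrative, the conformance tests use their own mount-test image.

from kubernetes import client, config

config.load_kube_config()
core, ns = client.CoreV1Api(), "default"

def emptydir_pod(name, medium, mode):
    """Pod that writes a file into an emptyDir and reports its permissions."""
    return {
        "apiVersion": "v1", "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "securityContext": {"runAsUser": 1001},          # the non-root part
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["sh", "-c",
                            f"echo hi > /mnt/test/f && chmod {mode} /mnt/test/f"
                            " && ls -l /mnt/test/f"],
                "volumeMounts": [{"name": "scratch", "mountPath": "/mnt/test"}],
            }],
            "volumes": [{"name": "scratch",
                         "emptyDir": {"medium": medium} if medium else {}}],
        },
    }

core.create_namespaced_pod(ns, emptydir_pod("pod-emptydir-tmpfs", "Memory", "0777"))
core.create_namespaced_pod(ns, emptydir_pod("pod-emptydir-default", None, "0666"))
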
Feb 23 13:14:55.463: INFO: Running AfterSuite actions on all nodes
Feb 23 13:14:55.463: INFO: Running AfterSuite actions on node 1
Feb 23 13:14:55.463: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8861.882 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS