I1223 10:47:18.054810 8 e2e.go:224] Starting e2e run "964d56a7-2571-11ea-a9d2-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577098037 - Will randomize all specs
Will run 201 of 2164 specs

Dec 23 10:47:18.342: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 10:47:18.345: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 23 10:47:18.372: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 23 10:47:18.415: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 23 10:47:18.415: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 23 10:47:18.415: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 23 10:47:18.424: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 23 10:47:18.424: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 23 10:47:18.424: INFO: e2e test version: v1.13.12
Dec 23 10:47:18.426: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:47:18.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Dec 23 10:47:18.859: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 10:47:19.027: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"975a3326-2571-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00105a0b2), BlockOwnerDeletion:(*bool)(0xc00105a0b3)}}
Dec 23 10:47:19.062: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"97555510-2571-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00105a252), BlockOwnerDeletion:(*bool)(0xc00105a253)}}
Dec 23 10:47:19.085: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"975663a1-2571-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00105a3ea), BlockOwnerDeletion:(*bool)(0xc00105a3eb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:47:24.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-c7msb" for this suite.
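Note: the three OwnerReferences dumped above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the spec only checks that garbage collection is not blocked by that circle. A minimal hand-rolled sketch of the same shape, assuming a throwaway namespace gc-demo and the pause image (names and image are illustrative, not taken from the test):

kubectl create namespace gc-demo
for p in pod1 pod2 pod3; do
  kubectl run "$p" --generator=run-pod/v1 --image=k8s.gcr.io/pause:3.1 -n gc-demo
done
# ownerReferences need the owner's UID, so wire the circle after the pods exist.
uid() { kubectl get pod "$1" -n gc-demo -o jsonpath='{.metadata.uid}'; }
kubectl patch pod pod1 -n gc-demo --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$(uid pod3)\",\"blockOwnerDeletion\":true}]}}"
kubectl patch pod pod2 -n gc-demo --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$(uid pod1)\",\"blockOwnerDeletion\":true}]}}"
kubectl patch pod pod3 -n gc-demo --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod2\",\"uid\":\"$(uid pod2)\",\"blockOwnerDeletion\":true}]}}"
# Deleting the namespace (as the teardown below does) should still complete; the GC must not deadlock on the cycle.
kubectl delete namespace gc-demo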
Dec 23 10:47:30.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:47:30.333: INFO: namespace: e2e-tests-gc-c7msb, resource: bindings, ignored listing per whitelist
Dec 23 10:47:30.427: INFO: namespace e2e-tests-gc-c7msb deletion completed in 6.2434959s
• [SLOW TEST:12.001 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:47:30.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 10:47:30.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lxx7r'
Dec 23 10:47:32.775: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 10:47:32.775: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 23 10:47:34.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lxx7r'
Dec 23 10:47:35.125: INFO: stderr: ""
Dec 23 10:47:35.125: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:47:35.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lxx7r" for this suite.
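Note: the stderr captured above flags --generator=deployment/apps.v1 as deprecated and points at two replacements. A hedged sketch of both, reusing the image from the test (resource names are illustrative):

# Bare pod via the generator the warning recommends:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
# Or create the Deployment explicitly instead of going through kubectl run:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment e2e-test-nginx-deployment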
Dec 23 10:47:41.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:47:41.505: INFO: namespace: e2e-tests-kubectl-lxx7r, resource: bindings, ignored listing per whitelist
Dec 23 10:47:41.513: INFO: namespace e2e-tests-kubectl-lxx7r deletion completed in 6.369178631s
• [SLOW TEST:11.086 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:47:41.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 10:47:41.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-plvhm" to be "success or failure"
Dec 23 10:47:41.751: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.046248ms
Dec 23 10:47:43.868: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2072526s
Dec 23 10:47:45.889: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227406475s
Dec 23 10:47:48.227: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565690932s
Dec 23 10:47:50.255: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593282429s
Dec 23 10:47:52.269: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.608034114s
STEP: Saw pod success
Dec 23 10:47:52.269: INFO: Pod "downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 10:47:52.273: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005 container client-container:
STEP: delete the pod
Dec 23 10:47:52.330: INFO: Waiting for pod downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005 to disappear
Dec 23 10:47:52.336: INFO: Pod downwardapi-volume-a4e93909-2571-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:47:52.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-plvhm" for this suite.
Dec 23 10:47:59.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:47:59.188: INFO: namespace: e2e-tests-downward-api-plvhm, resource: bindings, ignored listing per whitelist
Dec 23 10:47:59.243: INFO: namespace e2e-tests-downward-api-plvhm deletion completed in 6.898666726s
• [SLOW TEST:17.730 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:47:59.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 23 10:47:59.505: INFO: Waiting up to 5m0s for pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005" in namespace "e2e-tests-containers-mctwn" to be "success or failure"
Dec 23 10:47:59.511: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679656ms
Dec 23 10:48:01.719: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214574132s
Dec 23 10:48:03.739: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234528893s
Dec 23 10:48:05.998: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493286286s
Dec 23 10:48:08.428: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.923717536s
Dec 23 10:48:10.446: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.9410592s
STEP: Saw pod success
Dec 23 10:48:10.446: INFO: Pod "client-containers-af8701a3-2571-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 10:48:10.453: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-af8701a3-2571-11ea-a9d2-0242ac110005 container test-container:
STEP: delete the pod
Dec 23 10:48:10.600: INFO: Waiting for pod client-containers-af8701a3-2571-11ea-a9d2-0242ac110005 to disappear
Dec 23 10:48:10.623: INFO: Pod client-containers-af8701a3-2571-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:48:10.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mctwn" for this suite.
Dec 23 10:48:16.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:48:16.808: INFO: namespace: e2e-tests-containers-mctwn, resource: bindings, ignored listing per whitelist
Dec 23 10:48:16.884: INFO: namespace e2e-tests-containers-mctwn deletion completed in 6.244070921s
• [SLOW TEST:17.641 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:48:16.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 23 10:48:30.188: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:48:31.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-fhpjq" for this suite.
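Note: the adopt/release steps above hinge entirely on label selectors: an orphan pod whose labels match the ReplicaSet selector is adopted, and rewriting that label releases it again. A rough hand-run equivalent, assuming a ReplicaSet that selects on name=pod-adoption-release (the replacement label value is illustrative):

# Release the pod: once the label no longer matches the selector, the ReplicaSet
# drops it from its replica count and creates a replacement to restore the desired count.
kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
kubectl get rs,pods -l name=pod-adoption-release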
Dec 23 10:48:57.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:48:58.073: INFO: namespace: e2e-tests-replicaset-fhpjq, resource: bindings, ignored listing per whitelist
Dec 23 10:48:58.090: INFO: namespace e2e-tests-replicaset-fhpjq deletion completed in 26.834686422s
• [SLOW TEST:41.206 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:48:58.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 10:48:58.350: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 23 10:48:58.367: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-62f4z/daemonsets","resourceVersion":"15777827"},"items":null}
Dec 23 10:48:58.371: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-62f4z/pods","resourceVersion":"15777827"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:48:58.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-62f4z" for this suite.
Dec 23 10:49:04.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:49:04.597: INFO: namespace: e2e-tests-daemonsets-62f4z, resource: bindings, ignored listing per whitelist
Dec 23 10:49:04.678: INFO: namespace e2e-tests-daemonsets-62f4z deletion completed in 6.289037562s
S [SKIPPING] [6.587 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
  Dec 23 10:48:58.350: Requires at least 2 nodes (not -1)
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:49:04.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 23 10:49:17.638: INFO: Successfully updated pod "annotationupdated69a37c8-2571-11ea-a9d2-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:49:19.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2pztl" for this suite.
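Note: the projected downwardAPI spec above edits an annotation on a running pod and waits for the change to show up in a file served by a projected volume. A minimal pod of that shape, assuming busybox and illustrative names and paths (not the ones the test generates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# The kubelet refreshes the projected file, so the edit eventually appears in /etc/podinfo/annotations:
kubectl annotate pod annotationupdate-demo builder=bob --overwrite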
Dec 23 10:49:42.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:49:42.099: INFO: namespace: e2e-tests-projected-2pztl, resource: bindings, ignored listing per whitelist
Dec 23 10:49:42.213: INFO: namespace e2e-tests-projected-2pztl deletion completed in 22.328208282s
• [SLOW TEST:37.535 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:49:42.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 10:49:42.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-cbj6w" to be "success or failure"
Dec 23 10:49:42.401: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.467863ms
Dec 23 10:49:44.448: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084831471s
Dec 23 10:49:46.466: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103118332s
Dec 23 10:49:48.520: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156849426s
Dec 23 10:49:50.566: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202796559s
Dec 23 10:49:52.660: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.296738987s
STEP: Saw pod success
Dec 23 10:49:52.660: INFO: Pod "downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 10:49:52.671: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005 container client-container:
STEP: delete the pod
Dec 23 10:49:52.901: INFO: Waiting for pod downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005 to disappear
Dec 23 10:49:52.928: INFO: Pod downwardapi-volume-ecd5d642-2571-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:49:52.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cbj6w" for this suite.
Dec 23 10:50:01.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:50:01.332: INFO: namespace: e2e-tests-projected-cbj6w, resource: bindings, ignored listing per whitelist
Dec 23 10:50:01.434: INFO: namespace e2e-tests-projected-cbj6w deletion completed in 8.485868094s
• [SLOW TEST:19.221 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:50:01.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 23 10:50:01.684: INFO: namespace e2e-tests-kubectl-zg7rm
Dec 23 10:50:01.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zg7rm'
Dec 23 10:50:02.180: INFO: stderr: ""
Dec 23 10:50:02.181: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 23 10:50:03.205: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:03.206: INFO: Found 0 / 1
Dec 23 10:50:04.194: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:04.194: INFO: Found 0 / 1
Dec 23 10:50:05.200: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:05.200: INFO: Found 0 / 1
Dec 23 10:50:06.198: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:06.198: INFO: Found 0 / 1
Dec 23 10:50:07.198: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:07.198: INFO: Found 0 / 1
Dec 23 10:50:08.373: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:08.374: INFO: Found 0 / 1
Dec 23 10:50:09.196: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:09.196: INFO: Found 0 / 1
Dec 23 10:50:10.197: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:10.197: INFO: Found 0 / 1
Dec 23 10:50:11.203: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:11.203: INFO: Found 0 / 1
Dec 23 10:50:12.204: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:12.204: INFO: Found 1 / 1
Dec 23 10:50:12.204: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 23 10:50:12.211: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 10:50:12.211: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 23 10:50:12.211: INFO: wait on redis-master startup in e2e-tests-kubectl-zg7rm
Dec 23 10:50:12.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vmqsx redis-master --namespace=e2e-tests-kubectl-zg7rm'
Dec 23 10:50:12.410: INFO: stderr: ""
Dec 23 10:50:12.410: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Dec 10:50:10.618 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Dec 10:50:10.618 # Server started, Redis version 3.2.12\n1:M 23 Dec 10:50:10.619 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Dec 10:50:10.619 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 23 10:50:12.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-zg7rm'
Dec 23 10:50:12.750: INFO: stderr: ""
Dec 23 10:50:12.750: INFO: stdout: "service/rm2 exposed\n"
Dec 23 10:50:12.761: INFO: Service rm2 in namespace e2e-tests-kubectl-zg7rm found.
STEP: exposing service
Dec 23 10:50:14.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-zg7rm'
Dec 23 10:50:15.127: INFO: stderr: ""
Dec 23 10:50:15.128: INFO: stdout: "service/rm3 exposed\n"
Dec 23 10:50:15.141: INFO: Service rm3 in namespace e2e-tests-kubectl-zg7rm found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:50:17.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zg7rm" for this suite.
Dec 23 10:50:41.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:50:41.478: INFO: namespace: e2e-tests-kubectl-zg7rm, resource: bindings, ignored listing per whitelist
Dec 23 10:50:41.550: INFO: namespace e2e-tests-kubectl-zg7rm deletion completed in 24.380973459s
• [SLOW TEST:40.116 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:50:41.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-mqnpr/secret-test-10455dac-2572-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 10:50:41.815: INFO: Waiting up to 5m0s for pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-mqnpr" to be "success or failure"
Dec 23 10:50:41.928: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 112.923277ms
Dec 23 10:50:43.954: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138690564s
Dec 23 10:50:45.971: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155657874s
Dec 23 10:50:47.990: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175253415s
Dec 23 10:50:50.081: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265836731s
Dec 23 10:50:52.338: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.523053532s
STEP: Saw pod success
Dec 23 10:50:52.338: INFO: Pod "pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 10:50:52.388: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005 container env-test:
STEP: delete the pod
Dec 23 10:50:52.706: INFO: Waiting for pod pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005 to disappear
Dec 23 10:50:52.717: INFO: Pod pod-configmaps-1047209c-2572-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:50:52.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mqnpr" for this suite.
Dec 23 10:50:58.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:50:58.827: INFO: namespace: e2e-tests-secrets-mqnpr, resource: bindings, ignored listing per whitelist
Dec 23 10:50:58.992: INFO: namespace e2e-tests-secrets-mqnpr deletion completed in 6.24552208s
• [SLOW TEST:17.442 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:50:58.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:50:59.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lsdsj" for this suite.
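Note: the QOS spec above only asserts that the control plane fills in status.qosClass on a submitted pod. A quick way to see the same field by hand, with an illustrative pod name and resource sizes (requests equal to limits on every container yields the Guaranteed class):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
# Prints Guaranteed; the field is set by the API server, not by the user.
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
kubectl delete pod qos-demo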
Dec 23 10:51:23.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:51:23.493: INFO: namespace: e2e-tests-pods-lsdsj, resource: bindings, ignored listing per whitelist
Dec 23 10:51:23.571: INFO: namespace e2e-tests-pods-lsdsj deletion completed in 24.201623854s
• [SLOW TEST:24.578 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:51:23.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-294eccb6-2572-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 10:51:23.913: INFO: Waiting up to 5m0s for pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-nw496" to be "success or failure"
Dec 23 10:51:23.963: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.952458ms
Dec 23 10:51:25.982: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068413239s
Dec 23 10:51:28.020: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106388855s
Dec 23 10:51:30.051: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137974027s
Dec 23 10:51:32.072: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158421275s
Dec 23 10:51:34.241: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.327231488s
STEP: Saw pod success
Dec 23 10:51:34.241: INFO: Pod "pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 10:51:34.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 23 10:51:34.675: INFO: Waiting for pod pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005 to disappear
Dec 23 10:51:34.756: INFO: Pod pod-secrets-295248c5-2572-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:51:34.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nw496" for this suite.
Dec 23 10:51:40.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:51:40.960: INFO: namespace: e2e-tests-secrets-nw496, resource: bindings, ignored listing per whitelist
Dec 23 10:51:41.187: INFO: namespace e2e-tests-secrets-nw496 deletion completed in 6.396285246s
• [SLOW TEST:17.616 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:51:41.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 23 10:51:41.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:41.719: INFO: stderr: ""
Dec 23 10:51:41.719: INFO: stdout: "pod/pause created\n"
Dec 23 10:51:41.719: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 23 10:51:41.720: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-d5w2m" to be "running and ready"
Dec 23 10:51:41.870: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 150.538593ms
Dec 23 10:51:43.887: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167120501s
Dec 23 10:51:45.909: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189274627s
Dec 23 10:51:48.163: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443725875s
Dec 23 10:51:50.191: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.471074732s
Dec 23 10:51:52.210: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.490134746s
Dec 23 10:51:52.210: INFO: Pod "pause" satisfied condition "running and ready"
Dec 23 10:51:52.210: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 23 10:51:52.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:52.532: INFO: stderr: ""
Dec 23 10:51:52.532: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 23 10:51:52.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:52.682: INFO: stderr: ""
Dec 23 10:51:52.682: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 23 10:51:52.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:52.828: INFO: stderr: ""
Dec 23 10:51:52.829: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 23 10:51:52.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:52.972: INFO: stderr: ""
Dec 23 10:51:52.972: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 23 10:51:52.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:53.149: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 10:51:53.149: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 23 10:51:53.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-d5w2m'
Dec 23 10:51:53.323: INFO: stderr: "No resources found.\n"
Dec 23 10:51:53.323: INFO: stdout: ""
Dec 23 10:51:53.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-d5w2m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 10:51:53.438: INFO: stderr: ""
Dec 23 10:51:53.438: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:51:53.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d5w2m" for this suite.
Dec 23 10:51:59.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:51:59.611: INFO: namespace: e2e-tests-kubectl-d5w2m, resource: bindings, ignored listing per whitelist
Dec 23 10:51:59.619: INFO: namespace e2e-tests-kubectl-d5w2m deletion completed in 6.162016743s
• [SLOW TEST:18.431 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:51:59.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 10:51:59.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-bvndd" to be "success or failure"
Dec 23 10:51:59.884: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.841421ms
Dec 23 10:52:02.117: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241100454s
Dec 23 10:52:04.135: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258601092s
Dec 23 10:52:06.164: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28813085s
Dec 23 10:52:08.181: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30471724s
Dec 23 10:52:10.230: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.354008194s
Dec 23 10:52:12.404: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.527873334s
STEP: Saw pod success
Dec 23 10:52:12.404: INFO: Pod "downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 10:52:12.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005 container client-container:
STEP: delete the pod
Dec 23 10:52:12.620: INFO: Waiting for pod downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005 to disappear
Dec 23 10:52:12.645: INFO: Pod downwardapi-volume-3ece4b12-2572-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 10:52:12.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bvndd" for this suite.
Dec 23 10:52:18.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 10:52:18.804: INFO: namespace: e2e-tests-downward-api-bvndd, resource: bindings, ignored listing per whitelist
Dec 23 10:52:18.901: INFO: namespace e2e-tests-downward-api-bvndd deletion completed in 6.246429579s
• [SLOW TEST:19.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 10:52:18.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xjlb2
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xjlb2
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xjlb2
Dec 23 10:52:19.227: INFO: Found 0 stateful pods, waiting for 1
Dec 23 10:52:29.252: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 23 10:52:29.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 10:52:30.119: INFO: stderr: ""
Dec 23 10:52:30.119: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 10:52:30.119: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 23 10:52:30.143: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 23 10:52:40.167: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 10:52:40.168: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 10:52:40.311: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 23 10:52:40.312: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }]
Dec 23 10:52:40.312: INFO: 
Dec 23 10:52:40.312: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 23 10:52:42.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.885364494s
Dec 23 10:52:43.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959264046s
Dec 23 10:52:44.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.536159538s
Dec 23 10:52:45.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.312193515s
Dec 23 10:52:47.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.271051518s
Dec 23 10:52:48.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.13261508s
Dec 23 10:52:49.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 832.752115ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xjlb2
Dec 23 10:52:51.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 10:52:54.292: INFO: stderr: ""
Dec 23 10:52:54.293: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 10:52:54.293: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 23 10:52:54.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 10:52:55.082: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 23 10:52:55.083: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 10:52:55.083: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 23 10:52:55.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 10:52:55.507: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 23 10:52:55.508: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 10:52:55.508: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 23 10:52:55.525: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 10:52:55.525: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 10:53:05.547: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 10:53:05.547: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 10:53:05.547: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 23 10:53:05.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 10:53:06.097: INFO: stderr: ""
Dec 23 10:53:06.098: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 10:53:06.098: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 23 10:53:06.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 10:53:06.642: INFO: stderr: ""
Dec 23 10:53:06.642: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 10:53:06.642: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 23 10:53:06.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 10:53:07.232: INFO: stderr: ""
Dec 23 10:53:07.232: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 10:53:07.232: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 23 10:53:07.232: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 10:53:07.242: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 23 10:53:17.267: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 10:53:17.267: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 10:53:17.267: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 10:53:17.303: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 23 10:53:17.303: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }]
Dec 23 10:53:17.304: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:17.304: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:17.304: INFO: 
Dec 23 10:53:17.304: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 10:53:18.321: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 23 10:53:18.322: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }]
Dec 23 10:53:18.322: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:18.322: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:18.322: INFO: 
Dec 23 10:53:18.322: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 10:53:19.755: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 23 10:53:19.756: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }]
Dec 23 10:53:19.756: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:19.756: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:19.756: INFO: 
Dec 23 10:53:19.756: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 10:53:20.772: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 23 10:53:20.772: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }]
Dec 23 10:53:20.772: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:20.772: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:20.772: INFO: 
Dec 23 10:53:20.772: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 10:53:21.815: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 23 10:53:21.816: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }]
Dec 23 10:53:21.816: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }]
Dec 23 10:53:21.816: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }
{Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:21.816: INFO: Dec 23 10:53:21.816: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 10:53:23.631: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 10:53:23.632: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }] Dec 23 10:53:23.632: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:23.632: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:23.632: INFO: Dec 23 10:53:23.632: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 10:53:24.787: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 10:53:24.788: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }] Dec 23 10:53:24.788: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:24.788: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2019-12-23 10:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:24.788: INFO: Dec 23 10:53:24.788: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 10:53:25.919: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 10:53:25.919: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }] Dec 23 10:53:25.920: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:25.920: INFO: Dec 23 10:53:25.920: INFO: StatefulSet ss has not reached scale 0, at 2 Dec 23 10:53:26.942: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 10:53:26.943: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:19 +0000 UTC }] Dec 23 10:53:26.943: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 10:52:40 +0000 UTC }] Dec 23 10:53:26.943: INFO: Dec 23 10:53:26.943: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-xjlb2 Dec 23 10:53:27.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:53:28.225: INFO: rc: 1 Dec 23 10:53:28.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000af9ce0 exit status 1 true [0xc001616710 0xc001616728 0xc001616740] [0xc001616710 0xc001616728 0xc001616740] [0xc001616720 0xc001616738] [0x935700 0x935700] 0xc00223b860 }: Command stdout: stderr: error: 
unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 23 10:53:38.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:53:38.410: INFO: rc: 1 Dec 23 10:53:38.410: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0009ff0e0 exit status 1 true [0xc0011bf248 0xc0011bf288 0xc0011bf308] [0xc0011bf248 0xc0011bf288 0xc0011bf308] [0xc0011bf280 0xc0011bf300] [0x935700 0x935700] 0xc000b31aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:53:48.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:53:48.605: INFO: rc: 1 Dec 23 10:53:48.606: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0009ff230 exit status 1 true [0xc0011bf340 0xc0011bf368 0xc0011bf3a0] [0xc0011bf340 0xc0011bf368 0xc0011bf3a0] [0xc0011bf360 0xc0011bf390] [0x935700 0x935700] 0xc000b31d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:53:58.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:53:58.768: INFO: rc: 1 Dec 23 10:53:58.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0009ff380 exit status 1 true [0xc0011bf3a8 0xc0011bf3d8 0xc0011bf3f8] [0xc0011bf3a8 0xc0011bf3d8 0xc0011bf3f8] [0xc0011bf3d0 0xc0011bf3f0] [0x935700 0x935700] 0xc001dfa060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:54:08.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:54:08.886: INFO: rc: 1 Dec 23 10:54:08.887: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000af9e30 exit status 1 true [0xc001616748 0xc001616760 0xc001616778] [0xc001616748 0xc001616760 0xc001616778] [0xc001616758 0xc001616770] [0x935700 0x935700] 0xc00223bb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:54:18.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:54:19.071: INFO: rc: 1 Dec 23 10:54:19.072: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016e8000 exit status 1 true [0xc00025ec20 0xc00025ed78 0xc00025ee98] [0xc00025ec20 0xc00025ed78 0xc00025ee98] [0xc00025ed70 0xc00025ede0] [0x935700 0x935700] 0xc00107c420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:54:29.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:54:29.205: INFO: rc: 1 Dec 23 10:54:29.206: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ff0120 exit status 1 true [0xc001448000 0xc001448030 0xc001448048] [0xc001448000 0xc001448030 0xc001448048] [0xc001448028 0xc001448040] [0x935700 0x935700] 0xc001f5c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:54:39.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:54:39.440: INFO: rc: 1 Dec 23 10:54:39.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e0120 exit status 1 true [0xc00068e048 0xc00068e1d8 0xc00068e3c0] [0xc00068e048 0xc00068e1d8 0xc00068e3c0] [0xc00068e0f0 0xc00068e358] [0x935700 0x935700] 0xc001f1a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:54:49.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:54:49.592: INFO: rc: 1 Dec 23 10:54:49.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e0270 exit status 1 true [0xc00068e438 0xc00068e550 0xc00068e658] [0xc00068e438 0xc00068e550 0xc00068e658] [0xc00068e540 0xc00068e600] [0x935700 0x935700] 0xc001f1a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:54:59.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:54:59.764: INFO: rc: 1 Dec 23 10:54:59.764: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e03c0 exit status 1 true [0xc00068e708 0xc00068e820 0xc00068e998] [0xc00068e708 0xc00068e820 0xc00068e998] [0xc00068e7f8 0xc00068e920] [0x935700 0x935700] 0xc001f1a720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:55:09.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:55:09.925: INFO: rc: 1 Dec 23 10:55:09.926: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e04e0 exit status 1 true [0xc00068e9f8 0xc00068ea98 0xc00068eb48] [0xc00068e9f8 0xc00068ea98 0xc00068eb48] [0xc00068ea60 0xc00068eb08] [0x935700 0x935700] 0xc001f1a9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:55:19.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:55:20.089: INFO: rc: 1 Dec 23 10:55:20.089: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010a2120 exit status 1 true [0xc00025eed8 0xc00025f0f0 0xc00025f2b8] [0xc00025eed8 0xc00025f0f0 0xc00025f2b8] [0xc00025f0c0 0xc00025f270] [0x935700 0x935700] 0xc000b306c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:55:30.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:55:30.273: INFO: rc: 1 Dec 23 10:55:30.273: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010a2240 exit status 1 true [0xc00025f2e0 0xc00025f408 0xc00025f4e0] [0xc00025f2e0 0xc00025f408 0xc00025f4e0] [0xc00025f3b0 0xc00025f4b0] [0x935700 0x935700] 0xc000b30960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:55:40.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:55:40.471: INFO: rc: 1 Dec 23 10:55:40.472: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001600120 exit status 1 true [0xc001616000 0xc001616018 0xc001616030] [0xc001616000 0xc001616018 0xc001616030] [0xc001616010 0xc001616028] [0x935700 0x935700] 0xc000a3c900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:55:50.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:55:50.631: INFO: rc: 1 Dec 23 10:55:50.632: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001600240 exit status 1 true [0xc001616038 0xc001616058 0xc001616070] [0xc001616038 0xc001616058 0xc001616070] [0xc001616048 0xc001616068] [0x935700 0x935700] 0xc000a3cc60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:56:00.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:56:00.791: INFO: rc: 1 Dec 23 10:56:00.791: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e06f0 exit status 1 true [0xc00068eb58 0xc00068eba0 0xc00068ebc0] [0xc00068eb58 0xc00068eba0 0xc00068ebc0] [0xc00068eb98 0xc00068ebb8] [0x935700 0x935700] 0xc001f1ac60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:56:10.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:56:11.316: INFO: rc: 1 Dec 23 10:56:11.316: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ff02a0 exit status 1 true [0xc001448050 0xc001448090 0xc0014480e0] [0xc001448050 0xc001448090 0xc0014480e0] [0xc001448088 0xc0014480c8] [0x935700 0x935700] 0xc001f5c5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:56:21.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:56:21.474: INFO: rc: 1 Dec 23 10:56:21.474: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010a23c0 exit status 1 true [0xc00025f510 0xc00025f600 0xc00025f6e0] [0xc00025f510 0xc00025f600 
0xc00025f6e0] [0xc00025f5d8 0xc00025f6c0] [0x935700 0x935700] 0xc000b30c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:56:31.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:56:31.634: INFO: rc: 1 Dec 23 10:56:31.634: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001600150 exit status 1 true [0xc00016e000 0xc001448028 0xc001448040] [0xc00016e000 0xc001448028 0xc001448040] [0xc001448020 0xc001448038] [0x935700 0x935700] 0xc001f5c180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:56:41.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:56:41.860: INFO: rc: 1 Dec 23 10:56:41.860: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016002a0 exit status 1 true [0xc001448048 0xc001448088 0xc0014480c8] [0xc001448048 0xc001448088 0xc0014480c8] [0xc001448068 0xc0014480b0] [0x935700 0x935700] 0xc001f5c420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:56:51.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:56:52.017: INFO: rc: 1 Dec 23 10:56:52.017: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016003f0 exit status 1 true [0xc0014480e0 0xc001448120 0xc001448150] [0xc0014480e0 0xc001448120 0xc001448150] [0xc001448108 0xc001448140] [0x935700 0x935700] 0xc001f5cfc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:57:02.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:57:02.191: INFO: rc: 1 Dec 23 10:57:02.192: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001600510 exit status 1 true [0xc001448160 0xc0014481b8 0xc001448208] [0xc001448160 0xc0014481b8 0xc001448208] [0xc001448190 0xc0014481e0] [0x935700 0x935700] 0xc001f5d260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 
10:57:12.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:57:12.368: INFO: rc: 1 Dec 23 10:57:12.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ff0180 exit status 1 true [0xc001616000 0xc001616018 0xc001616030] [0xc001616000 0xc001616018 0xc001616030] [0xc001616010 0xc001616028] [0x935700 0x935700] 0xc000a3c900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:57:22.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:57:22.559: INFO: rc: 1 Dec 23 10:57:22.560: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001600660 exit status 1 true [0xc001448218 0xc001448270 0xc0014482a8] [0xc001448218 0xc001448270 0xc0014482a8] [0xc001448240 0xc001448298] [0x935700 0x935700] 0xc001f5d500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:57:32.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:57:32.708: INFO: rc: 1 Dec 23 10:57:32.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e00f0 exit status 1 true [0xc00025ec20 0xc00025ed78 0xc00025ee98] [0xc00025ec20 0xc00025ed78 0xc00025ee98] [0xc00025ed70 0xc00025ede0] [0x935700 0x935700] 0xc000b30720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:57:42.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:57:42.804: INFO: rc: 1 Dec 23 10:57:42.805: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010a21b0 exit status 1 true [0xc00068e048 0xc00068e1d8 0xc00068e3c0] [0xc00068e048 0xc00068e1d8 0xc00068e3c0] [0xc00068e0f0 0xc00068e358] [0x935700 0x935700] 0xc001f1a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:57:52.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ 
|| true' Dec 23 10:57:52.916: INFO: rc: 1 Dec 23 10:57:52.917: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e02a0 exit status 1 true [0xc00025eed8 0xc00025f0f0 0xc00025f2b8] [0xc00025eed8 0xc00025f0f0 0xc00025f2b8] [0xc00025f0c0 0xc00025f270] [0x935700 0x935700] 0xc000b309c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:58:02.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:58:03.073: INFO: rc: 1 Dec 23 10:58:03.074: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e0420 exit status 1 true [0xc00025f2e0 0xc00025f408 0xc00025f4e0] [0xc00025f2e0 0xc00025f408 0xc00025f4e0] [0xc00025f3b0 0xc00025f4b0] [0x935700 0x935700] 0xc000b30f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:58:13.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:58:13.267: INFO: rc: 1 Dec 23 10:58:13.268: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e0570 exit status 1 true [0xc00025f6f0 0xc00025f730 0xc00025f790] [0xc00025f6f0 0xc00025f730 0xc00025f790] [0xc00025f718 0xc00025f780] [0x935700 0x935700] 0xc000b311a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:58:23.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:58:23.457: INFO: rc: 1 Dec 23 10:58:23.458: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019e0150 exit status 1 true [0xc001616000 0xc001616018 0xc001616030] [0xc001616000 0xc001616018 0xc001616030] [0xc001616010 0xc001616028] [0x935700 0x935700] 0xc001f5c180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 23 10:58:33.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjlb2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 10:58:33.610: INFO: rc: 1 Dec 23 10:58:33.611: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Dec 23 10:58:33.611: INFO: Scaling statefulset ss to 
0 Dec 23 10:58:33.749: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 23 10:58:33.757: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xjlb2 Dec 23 10:58:33.764: INFO: Scaling statefulset ss to 0 Dec 23 10:58:33.785: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 10:58:33.789: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 10:58:33.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xjlb2" for this suite. Dec 23 10:58:43.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 10:58:44.116: INFO: namespace: e2e-tests-statefulset-xjlb2, resource: bindings, ignored listing per whitelist Dec 23 10:58:44.150: INFO: namespace e2e-tests-statefulset-xjlb2 deletion completed in 10.247359243s • [SLOW TEST:385.248 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 10:58:44.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Dec 23 10:58:55.305: INFO: Successfully updated pod "labelsupdate2fe327fd-2573-11ea-a9d2-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 10:58:57.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rkv6l" for this suite. 
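The Basic StatefulSet burst-scaling run that occupies most of the log above toggles readiness by moving nginx's index.html out of and back into place with kubectl exec, then scales the set while some pods are deliberately unready. The sketch below replays that pattern by hand. The namespace, pod names, and mv commands are taken from the log; the readiness probe fetching /index.html and podManagementPolicy: Parallel on the set are assumptions, and a reachable cluster with the kubeconfig shown in the log is assumed.

# Replay of the readiness-toggling pattern from the log (probe behavior and Parallel pod management assumed).
KCFG=/root/.kube/config
NS=e2e-tests-statefulset-xjlb2
# Make ss-0 fail its readiness check by hiding the page the probe is assumed to fetch.
kubectl --kubeconfig=$KCFG exec --namespace=$NS ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Scale up while ss-0 is unready; a burst (Parallel) StatefulSet creates the new pods anyway.
kubectl --kubeconfig=$KCFG scale statefulset ss --namespace=$NS --replicas=3
# Restore readiness on each pod once it exists.
for p in ss-0 ss-1 ss-2; do
  kubectl --kubeconfig=$KCFG exec --namespace=$NS "$p" -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
done
# Scale back to zero even though pods may be unready again; burst scale-down does not wait for health.
kubectl --kubeconfig=$KCFG scale statefulset ss --namespace=$NS --replicas=0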
Dec 23 10:59:21.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 10:59:21.982: INFO: namespace: e2e-tests-projected-rkv6l, resource: bindings, ignored listing per whitelist Dec 23 10:59:21.985: INFO: namespace e2e-tests-projected-rkv6l deletion completed in 24.396356805s • [SLOW TEST:37.835 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 10:59:21.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Dec 23 10:59:22.306: INFO: Pod name pod-release: Found 0 pods out of 1 Dec 23 10:59:27.333: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 10:59:28.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-wqtz9" for this suite. 
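The ReplicationController run above checks that a pod is released by its controller once its labels no longer match the selector. A hand-rolled equivalent of that release step is sketched below; the controller name and label key are illustrative (the image mirrors the nginx:1.14-alpine used elsewhere in this suite), and a working kubeconfig is assumed.

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo
spec:
  replicas: 1
  selector:
    name: pod-release-demo
  template:
    metadata:
      labels:
        name: pod-release-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Relabel the controller's pod so it no longer matches the selector; the RC releases it
# (drops the controller ownerReference) and creates a replacement pod.
POD=$(kubectl --kubeconfig=/root/.kube/config get pods -l name=pod-release-demo -o name | head -n 1)
kubectl --kubeconfig=/root/.kube/config label "$POD" name=released --overwrite
kubectl --kubeconfig=/root/.kube/config get "$POD" -o jsonpath='{.metadata.ownerReferences}'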
Dec 23 10:59:41.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 10:59:41.916: INFO: namespace: e2e-tests-replication-controller-wqtz9, resource: bindings, ignored listing per whitelist Dec 23 10:59:41.990: INFO: namespace e2e-tests-replication-controller-wqtz9 deletion completed in 12.709093882s • [SLOW TEST:20.005 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 10:59:41.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-525ed6a7-2573-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume secrets Dec 23 10:59:42.184: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-96cx7" to be "success or failure" Dec 23 10:59:42.197: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.899807ms Dec 23 10:59:44.213: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028787114s Dec 23 10:59:46.235: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050510225s Dec 23 10:59:48.254: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069141409s Dec 23 10:59:50.270: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085006555s Dec 23 10:59:52.554: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.369064764s STEP: Saw pod success Dec 23 10:59:52.554: INFO: Pod "pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 10:59:52.576: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Dec 23 10:59:52.720: INFO: Waiting for pod pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005 to disappear Dec 23 10:59:52.869: INFO: Pod pod-projected-secrets-525fe78f-2573-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 10:59:52.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-96cx7" for this suite. Dec 23 10:59:58.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 10:59:58.975: INFO: namespace: e2e-tests-projected-96cx7, resource: bindings, ignored listing per whitelist Dec 23 10:59:59.154: INFO: namespace e2e-tests-projected-96cx7 deletion completed in 6.261056373s • [SLOW TEST:17.164 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 10:59:59.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5ca43d6c-2573-11ea-a9d2-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5ca43d6c-2573-11ea-a9d2-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:00:09.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rkqjh" for this suite. 
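The projected-secret run earlier in this block mounts a secret through a projected volume with defaultMode set and inspects the resulting files from inside the pod. A minimal manual version is sketched below; the secret name, key, and mode value are illustrative rather than read from the log.

kubectl --kubeconfig=/root/.kube/config create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-demo-secret
EOF
# Once the pod has run to completion, its log shows the 0400 file mode and the secret value.
kubectl --kubeconfig=/root/.kube/config logs pod-projected-secrets-demo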
Dec 23 11:00:33.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:00:33.812: INFO: namespace: e2e-tests-projected-rkqjh, resource: bindings, ignored listing per whitelist Dec 23 11:00:33.849: INFO: namespace e2e-tests-projected-rkqjh deletion completed in 24.307520872s • [SLOW TEST:34.695 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:00:33.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-715fd17e-2573-11ea-a9d2-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:00:44.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8gkt2" for this suite. 
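The ConfigMap binary-data run above verifies that both data (UTF-8) and binaryData keys appear as files in a mounted volume. The sketch below builds such a ConfigMap by hand; kubectl is expected to place non-UTF-8 file content under binaryData automatically, and all names here are illustrative.

# A few non-UTF-8 bytes should land in .binaryData; the literal lands in .data.
printf '\xff\xfe\x01\x02' > /tmp/dump.bin
kubectl --kubeconfig=/root/.kube/config create configmap binary-demo \
  --from-literal=text-data=hello --from-file=binary-data=/tmp/dump.bin
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/text-data && od -c /etc/cm/binary-data"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: binary-demo
EOF
# After the pod completes, both the text key and the raw bytes show up in its log.
kubectl --kubeconfig=/root/.kube/config logs configmap-binary-demo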
Dec 23 11:01:08.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:01:08.480: INFO: namespace: e2e-tests-configmap-8gkt2, resource: bindings, ignored listing per whitelist Dec 23 11:01:08.581: INFO: namespace e2e-tests-configmap-8gkt2 deletion completed in 24.278879893s • [SLOW TEST:34.732 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:01:08.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Dec 23 11:01:09.370: INFO: created pod pod-service-account-defaultsa Dec 23 11:01:09.370: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 23 11:01:09.385: INFO: created pod pod-service-account-mountsa Dec 23 11:01:09.385: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 23 11:01:09.487: INFO: created pod pod-service-account-nomountsa Dec 23 11:01:09.487: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 23 11:01:09.509: INFO: created pod pod-service-account-defaultsa-mountspec Dec 23 11:01:09.509: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 23 11:01:09.563: INFO: created pod pod-service-account-mountsa-mountspec Dec 23 11:01:09.564: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 23 11:01:09.620: INFO: created pod pod-service-account-nomountsa-mountspec Dec 23 11:01:09.620: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 23 11:01:09.649: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 23 11:01:09.649: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 23 11:01:09.684: INFO: created pod pod-service-account-mountsa-nomountspec Dec 23 11:01:09.684: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 23 11:01:09.716: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 23 11:01:09.717: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:01:09.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-7fh8p" for this suite. 
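The ServiceAccounts run above exercises the automountServiceAccountToken switch at both the ServiceAccount and the pod level, with the pod-level setting taking precedence. A minimal opt-out example is sketched below; the names are illustrative and a working kubeconfig is assumed.

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level value wins over the ServiceAccount's
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount 2>/dev/null || echo 'no token mounted'; sleep 3600"]
EOF
# No service account token volume mount should appear on the container.
kubectl --kubeconfig=/root/.kube/config get pod pod-service-account-demo -o jsonpath='{.spec.containers[0].volumeMounts}'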
Dec 23 11:01:38.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:01:38.149: INFO: namespace: e2e-tests-svcaccounts-7fh8p, resource: bindings, ignored listing per whitelist Dec 23 11:01:38.173: INFO: namespace e2e-tests-svcaccounts-7fh8p deletion completed in 28.368722715s • [SLOW TEST:29.591 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:01:38.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-97b4470e-2573-11ea-a9d2-0242ac110005 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-97b4470e-2573-11ea-a9d2-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:01:50.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nbsj9" for this suite. 
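The ConfigMap update run above mounts a ConfigMap as a volume, rewrites the ConfigMap, and waits for the kubelet to refresh the projected file (typically within about a minute). A manual version of that check is sketched below with illustrative names.

kubectl --kubeconfig=/root/.kube/config create configmap update-demo --from-literal=data-1=value-1
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: update-demo
EOF
# Change the ConfigMap, then poll the mounted file until the kubelet syncs the new value.
kubectl --kubeconfig=/root/.kube/config patch configmap update-demo -p '{"data":{"data-1":"value-2"}}'
until kubectl --kubeconfig=/root/.kube/config exec configmap-update-demo -- cat /etc/cm/data-1 | grep -q value-2; do sleep 5; done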
Dec 23 11:02:14.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:02:14.916: INFO: namespace: e2e-tests-configmap-nbsj9, resource: bindings, ignored listing per whitelist Dec 23 11:02:14.992: INFO: namespace e2e-tests-configmap-nbsj9 deletion completed in 24.132781936s • [SLOW TEST:36.819 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:02:14.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-w56z5 Dec 23 11:02:25.585: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-w56z5 STEP: checking the pod's current state and verifying that restartCount is present Dec 23 11:02:25.594: INFO: Initial restart count of pod liveness-exec is 0 Dec 23 11:03:14.195: INFO: Restart count of pod e2e-tests-container-probe-w56z5/liveness-exec is now 1 (48.601409262s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:03:14.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-w56z5" for this suite. 
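[Editor's note] The restart observed above (restartCount going from 0 to 1) is driven by an exec liveness probe that begins failing once /tmp/health disappears. The pod below is an illustrative equivalent; the image, timings, and name are assumptions, not copied from the test.

---
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo              # placeholder name
spec:
  containers:
  - name: liveness
    image: busybox
    # create the health file, remove it after 30s, then idle; the probe then starts failing
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1               # a single failed probe triggers a container restart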
Dec 23 11:03:20.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:03:20.729: INFO: namespace: e2e-tests-container-probe-w56z5, resource: bindings, ignored listing per whitelist Dec 23 11:03:20.776: INFO: namespace e2e-tests-container-probe-w56z5 deletion completed in 6.447353217s • [SLOW TEST:65.783 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:03:20.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d4c734d0-2573-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:03:20.986: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-pzsxr" to be "success or failure" Dec 23 11:03:21.011: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.98467ms Dec 23 11:03:23.206: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219806144s Dec 23 11:03:25.226: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240508657s Dec 23 11:03:27.257: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271427875s Dec 23 11:03:30.149: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.162789915s Dec 23 11:03:32.179: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.193550047s STEP: Saw pod success Dec 23 11:03:32.180: INFO: Pod "pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:03:32.188: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 23 11:03:32.890: INFO: Waiting for pod pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005 to disappear Dec 23 11:03:33.451: INFO: Pod pod-configmaps-d4c8a63d-2573-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:03:33.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pzsxr" for this suite. Dec 23 11:03:39.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:03:39.784: INFO: namespace: e2e-tests-configmap-pzsxr, resource: bindings, ignored listing per whitelist Dec 23 11:03:39.881: INFO: namespace e2e-tests-configmap-pzsxr deletion completed in 6.407503713s • [SLOW TEST:19.105 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:03:39.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 23 11:03:40.174: INFO: Waiting up to 5m0s for pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-sqnvj" to be "success or failure" Dec 23 11:03:40.279: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.005571ms Dec 23 11:03:42.294: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119527477s Dec 23 11:03:44.306: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131765887s Dec 23 11:03:46.325: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150114941s Dec 23 11:03:48.351: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176054319s Dec 23 11:03:50.370: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.195720955s STEP: Saw pod success Dec 23 11:03:50.371: INFO: Pod "pod-e0385a98-2573-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:03:50.378: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e0385a98-2573-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:03:50.442: INFO: Waiting for pod pod-e0385a98-2573-11ea-a9d2-0242ac110005 to disappear Dec 23 11:03:50.458: INFO: Pod pod-e0385a98-2573-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:03:50.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sqnvj" for this suite. Dec 23 11:03:56.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:03:56.594: INFO: namespace: e2e-tests-emptydir-sqnvj, resource: bindings, ignored listing per whitelist Dec 23 11:03:56.704: INFO: namespace e2e-tests-emptydir-sqnvj deletion completed in 6.171671467s • [SLOW TEST:16.822 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:03:56.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Dec 23 11:03:56.827: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:04:12.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-dbvbb" for this suite. 
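[Editor's note] The InitContainer test above only logs "PodSpec: initContainers in spec.initContainers", so here is an illustrative equivalent: a restartPolicy=Never pod whose init containers must each run to completion, in order, before the app container starts. Names and images are placeholders.

---
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                       # placeholder name
spec:
  restartPolicy: Never
  initContainers:                       # run sequentially; each must exit 0 before the next starts
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init step"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init step"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app runs only after both init containers succeed"]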
Dec 23 11:04:18.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:04:18.397: INFO: namespace: e2e-tests-init-container-dbvbb, resource: bindings, ignored listing per whitelist Dec 23 11:04:18.514: INFO: namespace e2e-tests-init-container-dbvbb deletion completed in 6.34144416s • [SLOW TEST:21.810 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:04:18.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 23 11:04:27.615: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f742080f-2573-11ea-a9d2-0242ac110005" Dec 23 11:04:27.615: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f742080f-2573-11ea-a9d2-0242ac110005" in namespace "e2e-tests-pods-jqq6k" to be "terminated due to deadline exceeded" Dec 23 11:04:27.622: INFO: Pod "pod-update-activedeadlineseconds-f742080f-2573-11ea-a9d2-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 6.621284ms Dec 23 11:04:29.772: INFO: Pod "pod-update-activedeadlineseconds-f742080f-2573-11ea-a9d2-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.156866644s Dec 23 11:04:29.773: INFO: Pod "pod-update-activedeadlineseconds-f742080f-2573-11ea-a9d2-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:04:29.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jqq6k" for this suite. 
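[Editor's note] The Pods test above sets spec.activeDeadlineSeconds on an already-running pod (one of the few pod-spec fields that can be updated in place) and then waits for the pod to fail with reason DeadlineExceeded. The static manifest below shows the field itself; the deadline value and names are assumptions.

---
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo                   # placeholder name
spec:
  activeDeadlineSeconds: 5              # once exceeded, the pod is failed with reason DeadlineExceeded
  containers:
  - name: sleeper
    image: busybox
    command: ["sh", "-c", "sleep 600"]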
Dec 23 11:04:36.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:04:36.152: INFO: namespace: e2e-tests-pods-jqq6k, resource: bindings, ignored listing per whitelist Dec 23 11:04:36.305: INFO: namespace e2e-tests-pods-jqq6k deletion completed in 6.325365827s • [SLOW TEST:17.791 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:04:36.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Dec 23 11:04:45.162: INFO: Successfully updated pod "labelsupdate01d8f6d9-2574-11ea-a9d2-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:04:47.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jvvws" for this suite. 
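[Editor's note] The Downward API labels test mounts the pod's own labels through a downwardAPI volume; when the labels are mutated, the kubelet rewrites the mounted file, which is what "Successfully updated pod" above precedes. Illustrative manifest with placeholder names; change the stage label after creation and /etc/podinfo/labels follows.

---
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                     # placeholder name
  labels:
    stage: initial                      # update this label to see the mounted file change
spec:
  containers:
  - name: viewer
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels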
Dec 23 11:05:11.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:05:11.572: INFO: namespace: e2e-tests-downward-api-jvvws, resource: bindings, ignored listing per whitelist Dec 23 11:05:11.617: INFO: namespace e2e-tests-downward-api-jvvws deletion completed in 24.210525588s • [SLOW TEST:35.312 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:05:11.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-zkncd STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-zkncd STEP: Deleting pre-stop pod Dec 23 11:05:36.971: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:05:37.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-zkncd" for this suite. 
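[Editor's note] The PreStop test's server counts "prestop" hits reported by a tester pod whose preStop lifecycle hook fires when the pod is deleted. The sketch below is a simplified stand-in: the hook here just runs a local command rather than calling back to a server, and all names are placeholders.

---
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                    # placeholder name
spec:
  terminationGracePeriodSeconds: 30     # the preStop hook must complete within the grace period
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo shutting down; sleep 2"]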
Dec 23 11:06:17.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:06:17.329: INFO: namespace: e2e-tests-prestop-zkncd, resource: bindings, ignored listing per whitelist Dec 23 11:06:17.353: INFO: namespace e2e-tests-prestop-zkncd deletion completed in 40.262020962s • [SLOW TEST:65.735 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:06:17.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:06:17.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-z67sd" to be "success or failure" Dec 23 11:06:17.630: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.866788ms Dec 23 11:06:19.685: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06557526s Dec 23 11:06:21.707: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088170041s Dec 23 11:06:23.722: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103062623s Dec 23 11:06:25.749: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129745165s Dec 23 11:06:27.776: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.157314003s STEP: Saw pod success Dec 23 11:06:27.776: INFO: Pod "downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:06:28.173: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:06:28.454: INFO: Waiting for pod downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005 to disappear Dec 23 11:06:28.479: INFO: Pod downwardapi-volume-3e115b2a-2574-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:06:28.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z67sd" for this suite. Dec 23 11:06:36.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:06:36.636: INFO: namespace: e2e-tests-projected-z67sd, resource: bindings, ignored listing per whitelist Dec 23 11:06:36.823: INFO: namespace e2e-tests-projected-z67sd deletion completed in 8.329868312s • [SLOW TEST:19.470 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:06:36.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-49adbb2e-2574-11ea-a9d2-0242ac110005 STEP: Creating secret with name s-test-opt-upd-49adbcbb-2574-11ea-a9d2-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-49adbb2e-2574-11ea-a9d2-0242ac110005 STEP: Updating secret s-test-opt-upd-49adbcbb-2574-11ea-a9d2-0242ac110005 STEP: Creating secret with name s-test-opt-create-49adbcf6-2574-11ea-a9d2-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:07:59.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6w4bb" for this suite. 
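[Editor's note] The Secrets test above mounts one secret that is later deleted, one that is updated, and one that is created only after the pod starts; marking the volume sources optional: true is what lets the pod start and keep running while a referenced secret is absent. A sketch with placeholder secret names (the suite's real names carry generated UID suffixes):

---
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo            # placeholder name
spec:
  containers:
  - name: viewer
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-del /etc/secret-create 2>&1; sleep 5; done"]
    volumeMounts:
    - name: del
      mountPath: /etc/secret-del
    - name: create
      mountPath: /etc/secret-create
  volumes:
  - name: del
    secret:
      secretName: s-test-opt-del        # may be deleted after the pod starts
      optional: true
  - name: create
    secret:
      secretName: s-test-opt-create     # may not exist yet when the pod starts
      optional: true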
Dec 23 11:08:23.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:08:23.500: INFO: namespace: e2e-tests-secrets-6w4bb, resource: bindings, ignored listing per whitelist Dec 23 11:08:23.529: INFO: namespace e2e-tests-secrets-6w4bb deletion completed in 24.227887206s • [SLOW TEST:106.706 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:08:23.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-893f9506-2574-11ea-a9d2-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-893f96e3-2574-11ea-a9d2-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-893f9506-2574-11ea-a9d2-0242ac110005 STEP: Updating configmap cm-test-opt-upd-893f96e3-2574-11ea-a9d2-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-893f9717-2574-11ea-a9d2-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:08:42.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6pss6" for this suite. 
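[Editor's note] The projected configMap variant exercises the same delete/update/create-later behaviour, but packs all sources into a single projected volume. Illustrative only; names are placeholders.

---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo               # placeholder name
spec:
  containers:
  - name: viewer
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del         # deleted later; optional keeps the pod healthy
          optional: true
      - configMap:
          name: cm-test-opt-upd         # updated later; the projected files follow the change
          optional: true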
Dec 23 11:09:04.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:09:04.448: INFO: namespace: e2e-tests-projected-6pss6, resource: bindings, ignored listing per whitelist Dec 23 11:09:04.592: INFO: namespace e2e-tests-projected-6pss6 deletion completed in 22.356805332s • [SLOW TEST:41.063 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:09:04.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 23 11:09:04.801: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 23 11:09:10.272: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 23 11:09:14.302: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 23 11:09:16.323: INFO: Creating deployment "test-rollover-deployment" Dec 23 11:09:16.382: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 23 11:09:18.597: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 23 11:09:19.001: INFO: Ensure that both replica sets have 1 created replica Dec 23 11:09:19.013: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 23 11:09:19.023: INFO: Updating deployment test-rollover-deployment Dec 23 11:09:19.023: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 23 11:09:21.427: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 23 11:09:21.447: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 23 11:09:21.454: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:21.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696159, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:23.500: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:23.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696159, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:25.679: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:25.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696159, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:27.474: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:27.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696159, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:29.479: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:29.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696168, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:31.496: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:31.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696168, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:33.479: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:33.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696168, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:35.477: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:35.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696168, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:37.474: INFO: all replica sets need to contain the pod-template-hash label Dec 23 11:09:37.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696168, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:41.010: INFO: Dec 23 11:09:41.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696178, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712696156, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:09:41.486: INFO: Dec 23 11:09:41.486: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 23 11:09:41.498: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2xptl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2xptl/deployments/test-rollover-deployment,UID:a899ee0e-2574-11ea-a994-fa163e34d433,ResourceVersion:15780351,Generation:2,CreationTimestamp:2019-12-23 11:09:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-23 11:09:16 +0000 UTC 2019-12-23 11:09:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-23 11:09:39 +0000 UTC 2019-12-23 11:09:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 23 11:09:41.503: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2xptl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2xptl/replicasets/test-rollover-deployment-5b8479fdb6,UID:aa360b69-2574-11ea-a994-fa163e34d433,ResourceVersion:15780341,Generation:2,CreationTimestamp:2019-12-23 11:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a899ee0e-2574-11ea-a994-fa163e34d433 0xc0021f4fd7 0xc0021f4fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 23 11:09:41.503: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 23 11:09:41.503: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2xptl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2xptl/replicasets/test-rollover-controller,UID:a1b8cf80-2574-11ea-a994-fa163e34d433,ResourceVersion:15780350,Generation:2,CreationTimestamp:2019-12-23 11:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a899ee0e-2574-11ea-a994-fa163e34d433 0xc0021f4daf 0xc0021f4dc0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 23 11:09:41.503: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2xptl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2xptl/replicasets/test-rollover-deployment-58494b7559,UID:a8bf233b-2574-11ea-a994-fa163e34d433,ResourceVersion:15780309,Generation:2,CreationTimestamp:2019-12-23 11:09:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a899ee0e-2574-11ea-a994-fa163e34d433 0xc0021f4f07 0xc0021f4f08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 23 11:09:41.510: INFO: Pod "test-rollover-deployment-5b8479fdb6-cq4rm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-cq4rm,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2xptl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2xptl/pods/test-rollover-deployment-5b8479fdb6-cq4rm,UID:aa59e250-2574-11ea-a994-fa163e34d433,ResourceVersion:15780326,Generation:0,CreationTimestamp:2019-12-23 11:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 aa360b69-2574-11ea-a994-fa163e34d433 0xc0018271f7 0xc0018271f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67m45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67m45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-67m45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001827260} {node.kubernetes.io/unreachable Exists NoExecute 0xc001827280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:09:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:09:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:09:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2019-12-23 11:09:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-23 11:09:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-23 11:09:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6592d67fffb50d83c6514ac787a40c14f693765a0a384953f5bd21869c210b3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:09:41.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2xptl" for this suite. Dec 23 11:09:49.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:09:49.680: INFO: namespace: e2e-tests-deployment-2xptl, resource: bindings, ignored listing per whitelist Dec 23 11:09:49.764: INFO: namespace e2e-tests-deployment-2xptl deletion completed in 8.247220129s • [SLOW TEST:45.172 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:09:49.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-gs8w STEP: Creating a pod to test atomic-volume-subpath Dec 23 11:09:50.873: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gs8w" in namespace "e2e-tests-subpath-pqdrw" to be "success or failure" Dec 23 11:09:50.972: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 99.122875ms Dec 23 11:09:52.992: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118274484s Dec 23 11:09:55.010: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136321804s Dec 23 11:09:57.188: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314396901s Dec 23 11:09:59.283: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409709186s Dec 23 11:10:01.312: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.438587153s Dec 23 11:10:03.329: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.456210679s Dec 23 11:10:05.349: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.475298761s Dec 23 11:10:07.405: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 16.531281063s Dec 23 11:10:09.490: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 18.616254343s Dec 23 11:10:11.506: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 20.633120211s Dec 23 11:10:13.525: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 22.65198023s Dec 23 11:10:15.543: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 24.670126069s Dec 23 11:10:17.561: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 26.687336677s Dec 23 11:10:19.578: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 28.704617576s Dec 23 11:10:21.591: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 30.717294541s Dec 23 11:10:23.621: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Running", Reason="", readiness=false. Elapsed: 32.747243119s Dec 23 11:10:25.759: INFO: Pod "pod-subpath-test-secret-gs8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.885472652s STEP: Saw pod success Dec 23 11:10:25.760: INFO: Pod "pod-subpath-test-secret-gs8w" satisfied condition "success or failure" Dec 23 11:10:26.216: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gs8w container test-container-subpath-secret-gs8w: STEP: delete the pod Dec 23 11:10:26.440: INFO: Waiting for pod pod-subpath-test-secret-gs8w to disappear Dec 23 11:10:26.465: INFO: Pod pod-subpath-test-secret-gs8w no longer exists STEP: Deleting pod pod-subpath-test-secret-gs8w Dec 23 11:10:26.465: INFO: Deleting pod "pod-subpath-test-secret-gs8w" in namespace "e2e-tests-subpath-pqdrw" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:10:26.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-pqdrw" for this suite. 
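
Editor's note on the "atomic-volume-subpath" fixture exercised above: the test creates a pod whose container mounts a single key of a secret volume through a subPath and exits after reading it, which is why the pod is polled until "success or failure". The sketch below is a minimal, hypothetical reconstruction in Go of that kind of pod spec, not the framework's exact fixture; the secret name ("my-secret"), key ("data-1"), and busybox image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: one secret volume, one container that mounts only the
	// "data-1" key of that secret via SubPath and reads it once, then exits.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/test-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/test-volume/data-1",
					SubPath:   "data-1", // expose only this key of the secret
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

In the run recorded above, the corresponding pod (pod-subpath-test-secret-gs8w) went Pending for roughly 16 seconds, ran for about 18 seconds, and then reached Succeeded, satisfying the "success or failure" condition after about 35 seconds.
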
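The repeated Phase polls above ("Waiting up to 5m0s for pod ... to be 'success or failure'") are the generic wait-for-terminal-phase pattern. Below is a minimal client-go sketch of that loop, assuming the pre-context Get signature contemporary with this suite; the kubeconfig path, namespace, and pod name are copied from the log purely for illustration.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSuccessOrFailure polls status.phase until the pod reaches a
// terminal phase or the timeout expires, mirroring the log entries above.
func waitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // keep polling while Pending or Running
	})
}

func main() {
	// Placeholder wiring; values are illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodSuccessOrFailure(client, "e2e-tests-subpath-pqdrw", "pod-subpath-test-secret-gs8w", 5*time.Minute); err != nil {
		panic(err)
	}
}
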
Dec 23 11:10:32.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:10:32.598: INFO: namespace: e2e-tests-subpath-pqdrw, resource: bindings, ignored listing per whitelist Dec 23 11:10:32.698: INFO: namespace e2e-tests-subpath-pqdrw deletion completed in 6.215148995s • [SLOW TEST:42.934 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:10:32.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 23 11:10:32.885: INFO: Creating deployment "nginx-deployment" Dec 23 11:10:32.903: INFO: Waiting for observed generation 1 Dec 23 11:10:35.768: INFO: Waiting for all required pods to come up Dec 23 11:10:35.794: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Dec 23 11:11:14.572: INFO: Waiting for deployment "nginx-deployment" to complete Dec 23 11:11:14.601: INFO: Updating deployment "nginx-deployment" with a non-existent image Dec 23 11:11:14.622: INFO: Updating deployment nginx-deployment Dec 23 11:11:14.622: INFO: Waiting for observed generation 2 Dec 23 11:11:17.808: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Dec 23 11:11:18.698: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Dec 23 11:11:18.922: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Dec 23 11:11:19.221: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Dec 23 11:11:19.222: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Dec 23 11:11:19.229: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Dec 23 11:11:19.239: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Dec 23 11:11:19.239: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Dec 23 11:11:20.799: INFO: Updating deployment nginx-deployment Dec 23 11:11:20.799: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Dec 23 11:11:21.206: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Dec 23 11:11:21.441: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] 
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 23 11:11:23.676: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-q2js5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q2js5/deployments/nginx-deployment,UID:d63c49a2-2574-11ea-a994-fa163e34d433,ResourceVersion:15780719,Generation:3,CreationTimestamp:2019-12-23 11:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-23 11:11:17 +0000 UTC 2019-12-23 11:10:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-23 11:11:21 +0000 UTC 2019-12-23 11:11:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 23 11:11:23.857: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-q2js5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q2js5/replicasets/nginx-deployment-5c98f8fb5,UID:ef1d61e4-2574-11ea-a994-fa163e34d433,ResourceVersion:15780708,Generation:3,CreationTimestamp:2019-12-23 11:11:14 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d63c49a2-2574-11ea-a994-fa163e34d433 0xc0020ae2a7 0xc0020ae2a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 23 11:11:23.858: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 23 11:11:23.858: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-q2js5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q2js5/replicasets/nginx-deployment-85ddf47c5d,UID:d64174c4-2574-11ea-a994-fa163e34d433,ResourceVersion:15780757,Generation:3,CreationTimestamp:2019-12-23 11:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d63c49a2-2574-11ea-a994-fa163e34d433 0xc0020ae3d7 0xc0020ae3d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 23 11:11:24.285: INFO: Pod "nginx-deployment-5c98f8fb5-47sdp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-47sdp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-47sdp,UID:f40aabd1-2574-11ea-a994-fa163e34d433,ResourceVersion:15780752,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e34377 0xc001e34378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e343e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e34400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.287: INFO: Pod "nginx-deployment-5c98f8fb5-69cj5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-69cj5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-69cj5,UID:ef7e1cb7-2574-11ea-a994-fa163e34d433,ResourceVersion:15780698,Generation:0,CreationTimestamp:2019-12-23 11:11:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e34477 0xc001e34478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e344e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e34500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:11:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.287: INFO: Pod "nginx-deployment-5c98f8fb5-8r4dk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8r4dk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-8r4dk,UID:f2f80ec8-2574-11ea-a994-fa163e34d433,ResourceVersion:15780727,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e347d7 0xc001e347d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e34840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e34860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.287: INFO: Pod "nginx-deployment-5c98f8fb5-bm8bw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bm8bw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-bm8bw,UID:f334fe0a-2574-11ea-a994-fa163e34d433,ResourceVersion:15780740,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e348d7 0xc001e348d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e349c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e349e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.288: INFO: Pod "nginx-deployment-5c98f8fb5-gqcfp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gqcfp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-gqcfp,UID:f32f2a97-2574-11ea-a994-fa163e34d433,ResourceVersion:15780733,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e34a57 0xc001e34a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] 
[] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e34ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e34db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.288: INFO: Pod "nginx-deployment-5c98f8fb5-hm9hg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hm9hg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-hm9hg,UID:ef42b216-2574-11ea-a994-fa163e34d433,ResourceVersion:15780683,Generation:0,CreationTimestamp:2019-12-23 11:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e34e37 0xc001e34e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e34ea0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001e34ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:14 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:11:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.289: INFO: Pod "nginx-deployment-5c98f8fb5-ksfx6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ksfx6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-ksfx6,UID:f40b66d5-2574-11ea-a994-fa163e34d433,ResourceVersion:15780763,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e352a7 0xc001e352a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e35320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e35340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.289: 
INFO: Pod "nginx-deployment-5c98f8fb5-m866c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m866c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-m866c,UID:f4824f6e-2574-11ea-a994-fa163e34d433,ResourceVersion:15780769,Generation:0,CreationTimestamp:2019-12-23 11:11:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e353b7 0xc001e353b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e354b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e354d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.289: INFO: Pod "nginx-deployment-5c98f8fb5-q4bb2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q4bb2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-q4bb2,UID:ef4cc2cd-2574-11ea-a994-fa163e34d433,ResourceVersion:15780696,Generation:0,CreationTimestamp:2019-12-23 11:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e35547 0xc001e35548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e355b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e35610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:11:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.289: INFO: Pod "nginx-deployment-5c98f8fb5-qldf4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qldf4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-qldf4,UID:f40b4758-2574-11ea-a994-fa163e34d433,ResourceVersion:15780753,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e35877 0xc001e35878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e358e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e35900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.289: INFO: Pod "nginx-deployment-5c98f8fb5-rcmp2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rcmp2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-rcmp2,UID:ef87dcf4-2574-11ea-a994-fa163e34d433,ResourceVersion:15780702,Generation:0,CreationTimestamp:2019-12-23 11:11:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e359a7 0xc001e359a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e35a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e35aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:11:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.290: INFO: Pod "nginx-deployment-5c98f8fb5-rx8kr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rx8kr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-rx8kr,UID:f40b06b3-2574-11ea-a994-fa163e34d433,ResourceVersion:15780762,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e35be7 0xc001e35be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e35c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e35c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.290: INFO: Pod "nginx-deployment-5c98f8fb5-th5nn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-th5nn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-5c98f8fb5-th5nn,UID:ef4c4618-2574-11ea-a994-fa163e34d433,ResourceVersion:15780692,Generation:0,CreationTimestamp:2019-12-23 11:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ef1d61e4-2574-11ea-a994-fa163e34d433 0xc001e35ce7 0xc001e35ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e35e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e35e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:11:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.290: INFO: Pod "nginx-deployment-85ddf47c5d-29f62" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-29f62,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-29f62,UID:f40a2f74-2574-11ea-a994-fa163e34d433,ResourceVersion:15780751,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001e35ee7 0xc001e35ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.291: INFO: Pod "nginx-deployment-85ddf47c5d-2lwf4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2lwf4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-2lwf4,UID:d662ba35-2574-11ea-a994-fa163e34d433,ResourceVersion:15780615,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e137 0xc001a8e138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-23 11:10:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:11:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://906c35458f3d42690aee19ac26dba0d67553fe08eaf2a47a7d9f8a563bde9f03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.291: INFO: Pod "nginx-deployment-85ddf47c5d-2vmhh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2vmhh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-2vmhh,UID:f33591bf-2574-11ea-a994-fa163e34d433,ResourceVersion:15780737,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e337 0xc001a8e338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.291: INFO: Pod "nginx-deployment-85ddf47c5d-5xvfp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5xvfp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-5xvfp,UID:d653489a-2574-11ea-a994-fa163e34d433,ResourceVersion:15780592,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e4c7 0xc001a8e4c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e530} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-23 11:10:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:10:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://368d2ae52adf07453f9f42db5ed427d5c0b73e811204abf671b3c09aff850aee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.291: INFO: Pod "nginx-deployment-85ddf47c5d-6c5rq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6c5rq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-6c5rq,UID:f335b43f-2574-11ea-a994-fa163e34d433,ResourceVersion:15780738,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e617 0xc001a8e618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e680} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.292: INFO: Pod "nginx-deployment-85ddf47c5d-8fvwn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8fvwn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-8fvwn,UID:d6575437-2574-11ea-a994-fa163e34d433,ResourceVersion:15780632,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e717 0xc001a8e718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e780} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-23 11:10:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:11:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e96f208084d4f4e50853d80c2507169c81213fcd6693c9f2e5357b663058ec6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.292: INFO: Pod "nginx-deployment-85ddf47c5d-9625m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9625m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-9625m,UID:f40a4de2-2574-11ea-a994-fa163e34d433,ResourceVersion:15780756,Generation:0,CreationTimestamp:2019-12-23 
11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e867 0xc001a8e868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8e900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8e930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.292: INFO: Pod "nginx-deployment-85ddf47c5d-9pvhw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9pvhw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-9pvhw,UID:f40a9409-2574-11ea-a994-fa163e34d433,ResourceVersion:15780754,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8e9a7 0xc001a8e9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8ea20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8ea40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.293: INFO: Pod "nginx-deployment-85ddf47c5d-cdwq2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cdwq2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-cdwq2,UID:f2dc6dc4-2574-11ea-a994-fa163e34d433,ResourceVersion:15780768,Generation:0,CreationTimestamp:2019-12-23 11:11:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8eab7 0xc001a8eab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8eb30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001a8eb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:11:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.293: INFO: Pod "nginx-deployment-85ddf47c5d-dtl45" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dtl45,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-dtl45,UID:f335d8fc-2574-11ea-a994-fa163e34d433,ResourceVersion:15780734,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8ec37 0xc001a8ec38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8eca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8ecc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 
11:11:24.293: INFO: Pod "nginx-deployment-85ddf47c5d-f2s4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f2s4t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-f2s4t,UID:f3350bc9-2574-11ea-a994-fa163e34d433,ResourceVersion:15780739,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8eff7 0xc001a8eff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8f060} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8f080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.294: INFO: Pod "nginx-deployment-85ddf47c5d-fbfzc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fbfzc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-fbfzc,UID:f40b0aaf-2574-11ea-a994-fa163e34d433,ResourceVersion:15780761,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8f0f7 0xc001a8f0f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8f1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8f1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.294: INFO: Pod "nginx-deployment-85ddf47c5d-gqlgk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gqlgk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-gqlgk,UID:d6578268-2574-11ea-a994-fa163e34d433,ResourceVersion:15780621,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8f237 0xc001a8f238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8f370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8f3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-23 11:10:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:11:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6af2b7acf8ed355419fa3b61803b9f4c3e521b652b2bf7927034275e428b8e2b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.294: INFO: Pod "nginx-deployment-85ddf47c5d-jjf6g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jjf6g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-jjf6g,UID:d67d03b1-2574-11ea-a994-fa163e34d433,ResourceVersion:15780636,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8f467 0xc001a8f468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8f580} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8f5a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-23 11:10:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:10:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e93ff9b2641d41b82feedbb6243902af802b48e5e89d0871c98b4a4fa9ca264f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.294: INFO: Pod "nginx-deployment-85ddf47c5d-rp9jc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rp9jc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-rp9jc,UID:f2fece67-2574-11ea-a994-fa163e34d433,ResourceVersion:15780730,Generation:0,CreationTimestamp:2019-12-23 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8f6b7 0xc001a8f6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8f820} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8f840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.295: INFO: Pod "nginx-deployment-85ddf47c5d-s5vt6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s5vt6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-s5vt6,UID:d67d267f-2574-11ea-a994-fa163e34d433,ResourceVersion:15780624,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8f8b7 0xc001a8f8b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8f9b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8f9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:37 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-23 11:10:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:11:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b0115056faf521169503afc54aec947026e57db90b6aee37aa25e2ed591a02f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.295: INFO: Pod "nginx-deployment-85ddf47c5d-s8tg4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s8tg4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-s8tg4,UID:f4095ab2-2574-11ea-a994-fa163e34d433,ResourceVersion:15780764,Generation:0,CreationTimestamp:2019-12-23 11:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8fa97 0xc001a8fa98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8fb70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8fb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.296: INFO: Pod "nginx-deployment-85ddf47c5d-vq8cf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vq8cf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-vq8cf,UID:d6621c63-2574-11ea-a994-fa163e34d433,ResourceVersion:15780643,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8fc07 0xc001a8fc08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8fd20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8fd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-23 11:10:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:11:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8f4a74cf8f1a232db44c2e87c7cd19c84cb9e47ea2f6ae5761105580087e1d3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.296: INFO: Pod "nginx-deployment-85ddf47c5d-wqtwb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wqtwb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-wqtwb,UID:f2febf51-2574-11ea-a994-fa163e34d433,ResourceVersion:15780726,Generation:0,CreationTimestamp:2019-12-23 
11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8fe07 0xc001a8fe08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8fee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a8ff00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 23 11:11:24.296: INFO: Pod "nginx-deployment-85ddf47c5d-xdxgp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xdxgp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-q2js5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2js5/pods/nginx-deployment-85ddf47c5d-xdxgp,UID:d662c7ca-2574-11ea-a994-fa163e34d433,ResourceVersion:15780640,Generation:0,CreationTimestamp:2019-12-23 11:10:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d64174c4-2574-11ea-a994-fa163e34d433 0xc001a8ff77 0xc001a8ff78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rdgns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rdgns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rdgns true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a8ffe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009ce010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:11:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:10:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-23 11:10:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 11:11:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f1d740e6c2739f490e43df4f293795138671003c6087da34d3fb2c95940fdfe6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:11:24.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-q2js5" for this suite. 
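Editor's note on the pod dump above: the "is available" / "is not available" labels follow (approximately) the deployment controller's availability rule, under which a pod counts as available once it is Running and its Ready condition has been True for at least minReadySeconds. A minimal sketch of that check, using only the k8s.io/api/core/v1 types and assuming minReadySeconds is zero as it would be for a deployment that does not set it; the pod literal is a hypothetical stand-in for entries like "nginx-deployment-85ddf47c5d-2lwf4":

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable approximates the rule the deployment controller uses when
// counting available replicas: the pod must be Running, its Ready condition
// must be True, and it must have been ready for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			readyFor := now.Sub(c.LastTransitionTime.Time)
			return readyFor >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	// Hypothetical pod resembling one of the Running entries logged above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{{
				Type:               corev1.PodReady,
				Status:             corev1.ConditionTrue,
				LastTransitionTime: metav1.Now(),
			}},
		},
	}
	fmt.Println(isPodAvailable(pod, 0, time.Now())) // true
}

Pods that are still Pending with only a PodScheduled condition, as in several of the dumps above, fail this check and are therefore logged as "not available".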
Dec 23 11:12:19.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:12:20.191: INFO: namespace: e2e-tests-deployment-q2js5, resource: bindings, ignored listing per whitelist Dec 23 11:12:20.223: INFO: namespace e2e-tests-deployment-q2js5 deletion completed in 53.32753679s • [SLOW TEST:107.525 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:12:20.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:12:21.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-lhxzc" to be "success or failure" Dec 23 11:12:21.519: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 257.732774ms Dec 23 11:12:24.008: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.747287965s Dec 23 11:12:26.041: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.7804096s Dec 23 11:12:28.056: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79479331s Dec 23 11:12:30.134: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.873250686s Dec 23 11:12:32.352: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.090591176s Dec 23 11:12:34.388: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.127144363s Dec 23 11:12:36.923: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.661952428s Dec 23 11:12:38.944: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.683245932s Dec 23 11:12:41.160: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.898735355s Dec 23 11:12:43.175: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.913960975s STEP: Saw pod success Dec 23 11:12:43.175: INFO: Pod "downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:12:43.180: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:12:44.766: INFO: Waiting for pod downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:12:44.823: INFO: Pod downwardapi-volume-16ad92f7-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:12:44.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lhxzc" for this suite. Dec 23 11:12:50.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:12:51.184: INFO: namespace: e2e-tests-downward-api-lhxzc, resource: bindings, ignored listing per whitelist Dec 23 11:12:51.215: INFO: namespace e2e-tests-downward-api-lhxzc deletion completed in 6.284889941s • [SLOW TEST:30.991 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:12:51.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jfgkc Dec 23 11:13:01.601: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jfgkc STEP: checking the pod's current state and verifying that restartCount is present Dec 23 11:13:01.607: INFO: Initial restart count of pod liveness-http is 0 Dec 23 11:13:28.607: INFO: Restart count of pod e2e-tests-container-probe-jfgkc/liveness-http is now 1 (26.999636995s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:13:28.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jfgkc" for 
this suite. Dec 23 11:13:34.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:13:35.128: INFO: namespace: e2e-tests-container-probe-jfgkc, resource: bindings, ignored listing per whitelist Dec 23 11:13:35.178: INFO: namespace e2e-tests-container-probe-jfgkc deletion completed in 6.4292879s • [SLOW TEST:43.961 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:13:35.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-42fe03e9-2575-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:13:35.433: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-bjtpp" to be "success or failure" Dec 23 11:13:35.439: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175398ms Dec 23 11:13:37.613: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180085968s Dec 23 11:13:39.629: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196441747s Dec 23 11:13:41.708: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275263257s Dec 23 11:13:43.718: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285393885s Dec 23 11:13:45.731: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.298008759s STEP: Saw pod success Dec 23 11:13:45.731: INFO: Pod "pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:13:45.737: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 23 11:13:46.723: INFO: Waiting for pod pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:13:47.636: INFO: Pod pod-projected-configmaps-43069ab4-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:13:47.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bjtpp" for this suite. Dec 23 11:13:53.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:13:54.049: INFO: namespace: e2e-tests-projected-bjtpp, resource: bindings, ignored listing per whitelist Dec 23 11:13:54.095: INFO: namespace e2e-tests-projected-bjtpp deletion completed in 6.394378396s • [SLOW TEST:18.916 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:13:54.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1223 11:14:25.487816 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
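The garbage-collector spec running here deletes the Deployment with deleteOptions.PropagationPolicy set to Orphan and then waits 30 seconds to confirm the ReplicaSet it created is not garbage-collected. A minimal client-go sketch of that delete call follows, assuming a v1.13-era client-go (method signatures without context.Context); the deployment name, namespace and kubeconfig path are placeholders, not values taken from the test.

// Sketch: delete a Deployment with PropagationPolicy=Orphan so its ReplicaSet
// is left behind rather than cascaded away.
// Assumes a ~v1.13-era client-go (no context.Context in the method signatures).
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	orphan := metav1.DeletePropagationOrphan
	// "example-deployment" and "default" are placeholders, not the test's names.
	err = cs.AppsV1().Deployments("default").Delete("example-deployment",
		&metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; its ReplicaSets were orphaned, not garbage-collected")
}

With the orphan policy the dependents' ownerReferences are removed instead of the delete cascading to them, which is what the spec asserts after its 30-second wait.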
Dec 23 11:14:25.488: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:14:25.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-g9wkq" for this suite. Dec 23 11:14:33.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:14:33.816: INFO: namespace: e2e-tests-gc-g9wkq, resource: bindings, ignored listing per whitelist Dec 23 11:14:33.838: INFO: namespace e2e-tests-gc-g9wkq deletion completed in 8.281070242s • [SLOW TEST:39.743 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:14:33.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:14:48.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-mlwjc" for this suite. 
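The Kubelet spec above runs a busybox command that always fails and then checks that the container reports a terminated reason. A sketch of that kind of pod follows, using only stock core/v1 types; the pod name, image tag and command are illustrative rather than copied from the test.

// Sketch of the kind of pod the Kubelet spec above runs: a busybox container whose
// command always fails, so its final state carries a Terminated reason.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits non-zero every time
			}},
		},
	}
	// Once the kubelet has run this, Status.ContainerStatuses[0].State.Terminated.Reason
	// is expected to be non-empty (typically "Error").
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}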
Dec 23 11:14:56.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:14:56.265: INFO: namespace: e2e-tests-kubelet-test-mlwjc, resource: bindings, ignored listing per whitelist Dec 23 11:14:56.308: INFO: namespace e2e-tests-kubelet-test-mlwjc deletion completed in 8.16763179s • [SLOW TEST:22.470 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:14:56.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-737504db-2575-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:14:56.711: INFO: Waiting up to 5m0s for pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-l27r5" to be "success or failure" Dec 23 11:14:56.883: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 171.333383ms Dec 23 11:14:59.148: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435886777s Dec 23 11:15:01.173: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46182806s Dec 23 11:15:03.228: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516276998s Dec 23 11:15:05.240: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528311908s Dec 23 11:15:07.252: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.540480593s STEP: Saw pod success Dec 23 11:15:07.252: INFO: Pod "pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:15:07.258: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 23 11:15:07.616: INFO: Waiting for pod pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:15:07.661: INFO: Pod pod-configmaps-7377e102-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:15:07.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l27r5" for this suite. Dec 23 11:15:13.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:15:13.912: INFO: namespace: e2e-tests-configmap-l27r5, resource: bindings, ignored listing per whitelist Dec 23 11:15:14.067: INFO: namespace e2e-tests-configmap-l27r5 deletion completed in 6.391150306s • [SLOW TEST:17.758 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:15:14.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:15:25.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-swm48" for this suite. 
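The ReplicationController spec above first creates a bare pod carrying a 'name' label and then creates an RC whose selector matches it, after which the controller adopts the orphan pod by setting itself as the pod's controller ownerReference. A sketch of the two objects involved, with illustrative names and labels, printed as JSON rather than submitted to a cluster:

// Sketch of the adoption scenario: a labelled pod exists first, then an RC whose
// selector matches that label is created and adopts it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	replicas := int32(1)

	orphanPod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // matches the pre-existing pod, so the RC adopts it
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphanPod.Spec,
			},
		},
	}
	for _, obj := range []interface{}{orphanPod, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}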
Dec 23 11:15:49.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:15:49.720: INFO: namespace: e2e-tests-replication-controller-swm48, resource: bindings, ignored listing per whitelist Dec 23 11:15:49.767: INFO: namespace e2e-tests-replication-controller-swm48 deletion completed in 24.19479562s • [SLOW TEST:35.699 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:15:49.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 23 11:15:50.292: INFO: Waiting up to 5m0s for pod "pod-93675de7-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-wgd4r" to be "success or failure" Dec 23 11:15:50.306: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.539797ms Dec 23 11:15:52.329: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036805659s Dec 23 11:15:54.345: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052069318s Dec 23 11:15:56.384: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091340433s Dec 23 11:15:58.412: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119268396s Dec 23 11:16:00.430: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137739666s STEP: Saw pod success Dec 23 11:16:00.431: INFO: Pod "pod-93675de7-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:16:00.505: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-93675de7-2575-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:16:00.651: INFO: Waiting for pod pod-93675de7-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:16:00.727: INFO: Pod pod-93675de7-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:16:00.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wgd4r" for this suite. 
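The EmptyDir spec above mounts an emptyDir volume on the default medium and has the test container report the mount's permission bits, expecting 0777 for this variant. A sketch of such a pod, with illustrative names, image and command:

// Sketch: emptyDir on the default medium, mounted into a container that reports the
// mount's mode bits.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource means the default (node-disk) medium;
				// the tmpfs variant later in this log sets Medium: corev1.StorageMediumMemory.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}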
Dec 23 11:16:06.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:16:06.964: INFO: namespace: e2e-tests-emptydir-wgd4r, resource: bindings, ignored listing per whitelist Dec 23 11:16:06.981: INFO: namespace e2e-tests-emptydir-wgd4r deletion completed in 6.241913136s • [SLOW TEST:17.214 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:16:06.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Dec 23 11:16:07.725: INFO: Waiting up to 5m0s for pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz" in namespace "e2e-tests-svcaccounts-fqfqf" to be "success or failure" Dec 23 11:16:07.795: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 69.754724ms Dec 23 11:16:09.810: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084787983s Dec 23 11:16:11.845: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11994976s Dec 23 11:16:14.155: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429481252s Dec 23 11:16:16.864: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.138479289s Dec 23 11:16:18.917: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.19218406s Dec 23 11:16:20.935: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.209738303s Dec 23 11:16:22.950: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Running", Reason="", readiness=false. Elapsed: 15.225130301s Dec 23 11:16:24.971: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.246183615s STEP: Saw pod success Dec 23 11:16:24.971: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz" satisfied condition "success or failure" Dec 23 11:16:24.976: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz container token-test: STEP: delete the pod Dec 23 11:16:25.679: INFO: Waiting for pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz to disappear Dec 23 11:16:25.858: INFO: Pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k9wlz no longer exists STEP: Creating a pod to test consume service account root CA Dec 23 11:16:26.167: INFO: Waiting up to 5m0s for pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh" in namespace "e2e-tests-svcaccounts-fqfqf" to be "success or failure" Dec 23 11:16:26.189: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 20.957087ms Dec 23 11:16:28.200: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032413963s Dec 23 11:16:30.214: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046096127s Dec 23 11:16:32.240: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072245817s Dec 23 11:16:34.553: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.385598482s Dec 23 11:16:36.596: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.428053792s Dec 23 11:16:38.648: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.479871241s Dec 23 11:16:40.666: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.497943581s STEP: Saw pod success Dec 23 11:16:40.666: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh" satisfied condition "success or failure" Dec 23 11:16:40.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh container root-ca-test: STEP: delete the pod Dec 23 11:16:40.957: INFO: Waiting for pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh to disappear Dec 23 11:16:40.966: INFO: Pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-8l5xh no longer exists STEP: Creating a pod to test consume service account namespace Dec 23 11:16:40.994: INFO: Waiting up to 5m0s for pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml" in namespace "e2e-tests-svcaccounts-fqfqf" to be "success or failure" Dec 23 11:16:41.020: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 25.742788ms Dec 23 11:16:43.077: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08331432s Dec 23 11:16:45.108: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.113574276s Dec 23 11:16:47.217: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22329826s Dec 23 11:16:49.286: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292220469s Dec 23 11:16:51.300: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306378656s Dec 23 11:16:53.322: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 12.328492682s Dec 23 11:16:55.654: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 14.660265361s Dec 23 11:16:57.666: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Pending", Reason="", readiness=false. Elapsed: 16.672467196s Dec 23 11:16:59.680: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.685605059s STEP: Saw pod success Dec 23 11:16:59.680: INFO: Pod "pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml" satisfied condition "success or failure" Dec 23 11:16:59.685: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml container namespace-test: STEP: delete the pod Dec 23 11:17:00.410: INFO: Waiting for pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml to disappear Dec 23 11:17:00.437: INFO: Pod pod-service-account-9dcb02cb-2575-11ea-a9d2-0242ac110005-k8nml no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:17:00.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-fqfqf" for this suite. 
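The ServiceAccounts spec above relies on the kubelet mounting the default service account's credentials into every pod at /var/run/secrets/kubernetes.io/serviceaccount; its three pods simply read the token, ca.crt and namespace files from that path. A sketch of an equivalent reader pod, with illustrative names and image:

// Sketch: a pod that reads the auto-mounted service account credentials from the
// well-known mount path.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sa-token-reader"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					fmt.Sprintf("cat %s/token %s/ca.crt %s/namespace", saDir, saDir, saDir)},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}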
Dec 23 11:17:08.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:17:08.562: INFO: namespace: e2e-tests-svcaccounts-fqfqf, resource: bindings, ignored listing per whitelist Dec 23 11:17:08.666: INFO: namespace e2e-tests-svcaccounts-fqfqf deletion completed in 8.216348938s • [SLOW TEST:61.685 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:17:08.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:17:08.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-q2ncb" to be "success or failure" Dec 23 11:17:08.913: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.43487ms Dec 23 11:17:11.241: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337762304s Dec 23 11:17:13.261: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357810377s Dec 23 11:17:15.732: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.828236949s Dec 23 11:17:17.751: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.847268164s Dec 23 11:17:19.769: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.865871551s STEP: Saw pod success Dec 23 11:17:19.770: INFO: Pod "downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:17:19.776: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:17:20.464: INFO: Waiting for pod downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:17:20.475: INFO: Pod downwardapi-volume-c243b33d-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:17:20.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q2ncb" for this suite. Dec 23 11:17:26.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:17:26.844: INFO: namespace: e2e-tests-projected-q2ncb, resource: bindings, ignored listing per whitelist Dec 23 11:17:26.898: INFO: namespace e2e-tests-projected-q2ncb deletion completed in 6.397173674s • [SLOW TEST:18.232 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:17:26.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Dec 23 11:17:27.126: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 23 11:17:27.138: INFO: Waiting for terminating namespaces to be deleted... 
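The Projected downwardAPI spec that finished just above ("should set DefaultMode on files") boils down to a pod whose projected volume carries a downwardAPI source plus a DefaultMode, so every projected file is created with those permission bits. A sketch of such a pod; the mode value, file path, mount path and names are illustrative:

// Sketch: projected volume with a downwardAPI source and an explicit DefaultMode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // illustrative; applied to every file in the projected volume
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-defaultmode-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}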
Dec 23 11:17:27.142: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Dec 23 11:17:27.154: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 23 11:17:27.154: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 23 11:17:27.154: INFO: Container coredns ready: true, restart count 0 Dec 23 11:17:27.154: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Dec 23 11:17:27.154: INFO: Container kube-proxy ready: true, restart count 0 Dec 23 11:17:27.154: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 23 11:17:27.154: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Dec 23 11:17:27.154: INFO: Container weave ready: true, restart count 0 Dec 23 11:17:27.154: INFO: Container weave-npc ready: true, restart count 0 Dec 23 11:17:27.154: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 23 11:17:27.154: INFO: Container coredns ready: true, restart count 0 Dec 23 11:17:27.154: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 23 11:17:27.154: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e2fbf48795a592], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:17:28.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-qz9h2" for this suite. 
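The scheduling-predicates spec above creates a pod whose NodeSelector matches no node, so it stays Pending and accrues the FailedScheduling event quoted in the log. A sketch of such a pod; the label key and value are deliberately bogus and the pause image is only a small placeholder:

// Sketch: a pod that can never be scheduled because no node carries its selector label.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler reports FailedScheduling.
			NodeSelector: map[string]string{"e2e.example/nonexistent": "true"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}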
Dec 23 11:17:34.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:17:34.281: INFO: namespace: e2e-tests-sched-pred-qz9h2, resource: bindings, ignored listing per whitelist Dec 23 11:17:34.364: INFO: namespace e2e-tests-sched-pred-qz9h2 deletion completed in 6.161204776s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.465 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:17:34.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d198cac9-2575-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:17:34.730: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-75jgb" to be "success or failure" Dec 23 11:17:34.779: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.556711ms Dec 23 11:17:36.942: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212510881s Dec 23 11:17:38.975: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245257588s Dec 23 11:17:41.345: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61521801s Dec 23 11:17:43.362: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.631686503s Dec 23 11:17:46.237: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.506637809s STEP: Saw pod success Dec 23 11:17:46.237: INFO: Pod "pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:17:46.262: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 23 11:17:46.653: INFO: Waiting for pod pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:17:46.668: INFO: Pod pod-projected-configmaps-d1a22e7a-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:17:46.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-75jgb" for this suite. Dec 23 11:17:52.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:17:52.738: INFO: namespace: e2e-tests-projected-75jgb, resource: bindings, ignored listing per whitelist Dec 23 11:17:52.827: INFO: namespace e2e-tests-projected-75jgb deletion completed in 6.151688128s • [SLOW TEST:18.462 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:17:52.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 23 11:17:53.079: INFO: Waiting up to 5m0s for pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-wpv9f" to be "success or failure" Dec 23 11:17:53.095: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.88646ms Dec 23 11:17:55.699: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619745487s Dec 23 11:17:57.721: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641572066s Dec 23 11:18:00.004: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.924355048s Dec 23 11:18:02.692: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.612385395s Dec 23 11:18:04.714: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.633948827s STEP: Saw pod success Dec 23 11:18:04.714: INFO: Pod "pod-dc99f6d2-2575-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:18:04.722: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dc99f6d2-2575-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:18:04.938: INFO: Waiting for pod pod-dc99f6d2-2575-11ea-a9d2-0242ac110005 to disappear Dec 23 11:18:04.955: INFO: Pod pod-dc99f6d2-2575-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:18:04.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wpv9f" for this suite. Dec 23 11:18:11.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:18:11.049: INFO: namespace: e2e-tests-emptydir-wpv9f, resource: bindings, ignored listing per whitelist Dec 23 11:18:11.215: INFO: namespace e2e-tests-emptydir-wpv9f deletion completed in 6.250262873s • [SLOW TEST:18.388 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:18:11.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Dec 23 11:18:11.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:13.668: INFO: stderr: "" Dec 23 11:18:13.668: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 23 11:18:13.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:13.949: INFO: stderr: "" Dec 23 11:18:13.949: INFO: stdout: "update-demo-nautilus-dz7jx update-demo-nautilus-t28xc " Dec 23 11:18:13.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:14.179: INFO: stderr: "" Dec 23 11:18:14.179: INFO: stdout: "" Dec 23 11:18:14.179: INFO: update-demo-nautilus-dz7jx is created but not running Dec 23 11:18:19.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:19.360: INFO: stderr: "" Dec 23 11:18:19.360: INFO: stdout: "update-demo-nautilus-dz7jx update-demo-nautilus-t28xc " Dec 23 11:18:19.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:19.512: INFO: stderr: "" Dec 23 11:18:19.512: INFO: stdout: "" Dec 23 11:18:19.512: INFO: update-demo-nautilus-dz7jx is created but not running Dec 23 11:18:24.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:24.676: INFO: stderr: "" Dec 23 11:18:24.676: INFO: stdout: "update-demo-nautilus-dz7jx update-demo-nautilus-t28xc " Dec 23 11:18:24.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:24.825: INFO: stderr: "" Dec 23 11:18:24.826: INFO: stdout: "" Dec 23 11:18:24.826: INFO: update-demo-nautilus-dz7jx is created but not running Dec 23 11:18:29.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:30.106: INFO: stderr: "" Dec 23 11:18:30.106: INFO: stdout: "update-demo-nautilus-dz7jx update-demo-nautilus-t28xc " Dec 23 11:18:30.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:30.203: INFO: stderr: "" Dec 23 11:18:30.203: INFO: stdout: "true" Dec 23 11:18:30.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:30.322: INFO: stderr: "" Dec 23 11:18:30.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 11:18:30.322: INFO: validating pod update-demo-nautilus-dz7jx Dec 23 11:18:30.363: INFO: got data: { "image": "nautilus.jpg" } Dec 23 11:18:30.364: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 23 11:18:30.364: INFO: update-demo-nautilus-dz7jx is verified up and running Dec 23 11:18:30.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t28xc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:30.523: INFO: stderr: "" Dec 23 11:18:30.523: INFO: stdout: "true" Dec 23 11:18:30.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t28xc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:30.661: INFO: stderr: "" Dec 23 11:18:30.661: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 11:18:30.661: INFO: validating pod update-demo-nautilus-t28xc Dec 23 11:18:30.704: INFO: got data: { "image": "nautilus.jpg" } Dec 23 11:18:30.704: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 11:18:30.704: INFO: update-demo-nautilus-t28xc is verified up and running STEP: scaling down the replication controller Dec 23 11:18:30.706: INFO: scanned /root for discovery docs: Dec 23 11:18:30.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:31.971: INFO: stderr: "" Dec 23 11:18:31.972: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 23 11:18:31.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:32.209: INFO: stderr: "" Dec 23 11:18:32.209: INFO: stdout: "update-demo-nautilus-dz7jx update-demo-nautilus-t28xc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 23 11:18:37.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:37.389: INFO: stderr: "" Dec 23 11:18:37.390: INFO: stdout: "update-demo-nautilus-dz7jx update-demo-nautilus-t28xc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 23 11:18:42.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:42.753: INFO: stderr: "" Dec 23 11:18:42.753: INFO: stdout: "update-demo-nautilus-dz7jx " Dec 23 11:18:42.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:42.995: INFO: stderr: "" Dec 23 11:18:42.996: INFO: stdout: "true" Dec 23 11:18:42.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:43.139: INFO: stderr: "" Dec 23 11:18:43.140: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 11:18:43.140: INFO: validating pod update-demo-nautilus-dz7jx Dec 23 11:18:43.148: INFO: got data: { "image": "nautilus.jpg" } Dec 23 11:18:43.148: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 11:18:43.148: INFO: update-demo-nautilus-dz7jx is verified up and running STEP: scaling up the replication controller Dec 23 11:18:43.150: INFO: scanned /root for discovery docs: Dec 23 11:18:43.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:45.051: INFO: stderr: "" Dec 23 11:18:45.051: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 23 11:18:45.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:45.508: INFO: stderr: "" Dec 23 11:18:45.509: INFO: stdout: "update-demo-nautilus-85vwz update-demo-nautilus-dz7jx " Dec 23 11:18:45.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:45.722: INFO: stderr: "" Dec 23 11:18:45.722: INFO: stdout: "" Dec 23 11:18:45.722: INFO: update-demo-nautilus-85vwz is created but not running Dec 23 11:18:50.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:50.870: INFO: stderr: "" Dec 23 11:18:50.871: INFO: stdout: "update-demo-nautilus-85vwz update-demo-nautilus-dz7jx " Dec 23 11:18:50.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:50.985: INFO: stderr: "" Dec 23 11:18:50.985: INFO: stdout: "" Dec 23 11:18:50.985: INFO: update-demo-nautilus-85vwz is created but not running Dec 23 11:18:55.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:56.195: INFO: stderr: "" Dec 23 11:18:56.195: INFO: stdout: "update-demo-nautilus-85vwz update-demo-nautilus-dz7jx " Dec 23 11:18:56.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:56.375: INFO: stderr: "" Dec 23 11:18:56.375: INFO: stdout: "true" Dec 23 11:18:56.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85vwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:56.590: INFO: stderr: "" Dec 23 11:18:56.591: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 11:18:56.591: INFO: validating pod update-demo-nautilus-85vwz Dec 23 11:18:56.612: INFO: got data: { "image": "nautilus.jpg" } Dec 23 11:18:56.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 11:18:56.613: INFO: update-demo-nautilus-85vwz is verified up and running Dec 23 11:18:56.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:56.797: INFO: stderr: "" Dec 23 11:18:56.798: INFO: stdout: "true" Dec 23 11:18:56.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dz7jx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:56.974: INFO: stderr: "" Dec 23 11:18:56.974: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 11:18:56.974: INFO: validating pod update-demo-nautilus-dz7jx Dec 23 11:18:56.982: INFO: got data: { "image": "nautilus.jpg" } Dec 23 11:18:56.982: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 11:18:56.982: INFO: update-demo-nautilus-dz7jx is verified up and running STEP: using delete to clean up resources Dec 23 11:18:56.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:57.135: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 11:18:57.135: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 23 11:18:57.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7ft9v' Dec 23 11:18:57.403: INFO: stderr: "No resources found.\n" Dec 23 11:18:57.404: INFO: stdout: "" Dec 23 11:18:57.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7ft9v -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 23 11:18:57.544: INFO: stderr: "" Dec 23 11:18:57.545: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:18:57.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7ft9v" for this suite. 
Dec 23 11:19:20.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:19:20.834: INFO: namespace: e2e-tests-kubectl-7ft9v, resource: bindings, ignored listing per whitelist Dec 23 11:19:20.906: INFO: namespace e2e-tests-kubectl-7ft9v deletion completed in 22.855411536s • [SLOW TEST:69.691 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:19:20.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-plpnj [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Dec 23 11:19:21.117: INFO: Found 0 stateful pods, waiting for 3 Dec 23 11:19:31.146: INFO: Found 2 stateful pods, waiting for 3 Dec 23 11:19:41.141: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 23 11:19:41.141: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 11:19:41.141: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 23 11:19:51.179: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 23 11:19:51.179: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 11:19:51.179: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 23 11:19:51.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-plpnj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 11:19:51.770: INFO: stderr: "" Dec 23 11:19:51.770: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 11:19:51.770: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 23 11:20:01.987: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating 
Pods in reverse ordinal order Dec 23 11:20:12.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-plpnj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 11:20:12.896: INFO: stderr: "" Dec 23 11:20:12.896: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 11:20:12.896: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 11:20:23.268: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:20:23.268: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 11:20:23.268: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 11:20:33.301: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:20:33.302: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 11:20:33.302: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 11:20:43.310: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:20:43.310: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 11:20:53.936: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:20:53.937: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 11:21:03.310: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update STEP: Rolling back to a previous revision Dec 23 11:21:13.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-plpnj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 11:21:14.090: INFO: stderr: "" Dec 23 11:21:14.091: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 11:21:14.091: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 11:21:24.189: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 23 11:21:34.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-plpnj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 11:21:35.114: INFO: stderr: "" Dec 23 11:21:35.115: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 11:21:35.115: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 11:21:45.180: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:21:45.180: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 11:21:45.180: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 11:21:55.210: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:21:55.211: INFO: Waiting for Pod 
e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 11:21:55.211: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 11:22:05.229: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:22:05.229: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 11:22:15.213: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update Dec 23 11:22:15.213: INFO: Waiting for Pod e2e-tests-statefulset-plpnj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 11:22:25.209: INFO: Waiting for StatefulSet e2e-tests-statefulset-plpnj/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 23 11:22:35.229: INFO: Deleting all statefulset in ns e2e-tests-statefulset-plpnj Dec 23 11:22:35.241: INFO: Scaling statefulset ss2 to 0 Dec 23 11:23:05.337: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 11:23:05.345: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:23:05.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-plpnj" for this suite. Dec 23 11:23:13.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:23:13.636: INFO: namespace: e2e-tests-statefulset-plpnj, resource: bindings, ignored listing per whitelist Dec 23 11:23:13.745: INFO: namespace e2e-tests-statefulset-plpnj deletion completed in 8.318395408s • [SLOW TEST:232.838 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:23:13.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9be17708-2576-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:23:14.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-rldct" to be "success or failure" Dec 23 11:23:14.082: 
INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.885571ms Dec 23 11:23:16.152: INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080502037s Dec 23 11:23:18.170: INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097824374s Dec 23 11:23:20.214: INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142472107s Dec 23 11:23:22.499: INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.427429768s Dec 23 11:23:24.531: INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.459649392s STEP: Saw pod success Dec 23 11:23:24.532: INFO: Pod "pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:23:24.546: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 23 11:23:24.882: INFO: Waiting for pod pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005 to disappear Dec 23 11:23:24.895: INFO: Pod pod-configmaps-9be374a4-2576-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:23:24.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rldct" for this suite. Dec 23 11:23:30.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:23:31.014: INFO: namespace: e2e-tests-configmap-rldct, resource: bindings, ignored listing per whitelist Dec 23 11:23:31.058: INFO: namespace e2e-tests-configmap-rldct deletion completed in 6.153979146s • [SLOW TEST:17.313 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:23:31.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] 
Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:23:31.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2djrz" for this suite. Dec 23 11:23:53.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:23:53.572: INFO: namespace: e2e-tests-kubelet-test-2djrz, resource: bindings, ignored listing per whitelist Dec 23 11:23:53.667: INFO: namespace e2e-tests-kubelet-test-2djrz deletion completed in 22.221799957s • [SLOW TEST:22.608 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:23:53.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:24:04.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-h6pcs" for this suite. 
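The hostAliases case above logs only its setup and teardown; what it asserts is that entries from the pod's spec.hostAliases end up appended to the container's /etc/hosts. A rough sketch of that spec shape, assuming the k8s.io/api module is available; the IP, hostnames and container command here are illustrative, not the test's actual values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		HostAliases: []corev1.HostAlias{
			{IP: "127.0.0.2", Hostnames: []string{"foo.local", "bar.local"}},
		},
		Containers: []corev1.Container{{
			Name:    "busybox-host-aliases",
			Image:   "busybox",
			Command: []string{"/bin/sh", "-c", "cat /etc/hosts && sleep 6000"},
		}},
	}
	// The kubelet renders each HostAlias as an "<ip> <hostname> <hostname>" line
	// in the container's /etc/hosts, which is what the assertion reads back.
	for _, ha := range spec.HostAliases {
		fmt.Println(ha.IP, ha.Hostnames)
	}
}
```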
Dec 23 11:24:48.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:24:48.342: INFO: namespace: e2e-tests-kubelet-test-h6pcs, resource: bindings, ignored listing per whitelist Dec 23 11:24:48.370: INFO: namespace e2e-tests-kubelet-test-h6pcs deletion completed in 44.235522053s • [SLOW TEST:54.703 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:24:48.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:24:59.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9q2tz" for this suite. 
Dec 23 11:25:05.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:25:05.287: INFO: namespace: e2e-tests-emptydir-wrapper-9q2tz, resource: bindings, ignored listing per whitelist Dec 23 11:25:05.323: INFO: namespace e2e-tests-emptydir-wrapper-9q2tz deletion completed in 6.293859483s • [SLOW TEST:16.953 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:25:05.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:25:05.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-4nbb9" to be "success or failure" Dec 23 11:25:05.893: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 120.177191ms Dec 23 11:25:07.911: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138999543s Dec 23 11:25:09.939: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166044999s Dec 23 11:25:12.174: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401723816s Dec 23 11:25:14.200: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.427805292s Dec 23 11:25:16.221: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.448138857s STEP: Saw pod success Dec 23 11:25:16.221: INFO: Pod "downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:25:16.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:25:16.339: INFO: Waiting for pod downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005 to disappear Dec 23 11:25:16.409: INFO: Pod downwardapi-volume-de7bfc1f-2576-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:25:16.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4nbb9" for this suite. Dec 23 11:25:22.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:25:22.548: INFO: namespace: e2e-tests-projected-4nbb9, resource: bindings, ignored listing per whitelist Dec 23 11:25:22.795: INFO: namespace e2e-tests-projected-4nbb9 deletion completed in 6.363271791s • [SLOW TEST:17.471 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:25:22.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 23 11:25:23.061: INFO: Creating deployment "test-recreate-deployment" Dec 23 11:25:23.077: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 23 11:25:23.107: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Dec 23 11:25:25.145: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 23 11:25:25.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:25:27.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:25:29.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:25:31.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712697123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 11:25:33.167: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 23 11:25:33.183: INFO: Updating deployment test-recreate-deployment Dec 23 11:25:33.183: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 23 11:25:33.724: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-jbwmg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jbwmg/deployments/test-recreate-deployment,UID:e8d222e4-2576-11ea-a994-fa163e34d433,ResourceVersion:15782946,Generation:2,CreationTimestamp:2019-12-23 11:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-23 11:25:33 +0000 UTC 2019-12-23 11:25:33 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-23 11:25:33 +0000 UTC 2019-12-23 11:25:23 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 23 11:25:33.744: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-jbwmg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jbwmg/replicasets/test-recreate-deployment-589c4bfd,UID:eefc620c-2576-11ea-a994-fa163e34d433,ResourceVersion:15782944,Generation:1,CreationTimestamp:2019-12-23 11:25:33 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8d222e4-2576-11ea-a994-fa163e34d433 0xc001a8fd0f 0xc001a8fd20}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 23 11:25:33.745: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 23 11:25:33.745: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-jbwmg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jbwmg/replicasets/test-recreate-deployment-5bf7f65dc,UID:e8d58b35-2576-11ea-a994-fa163e34d433,ResourceVersion:15782935,Generation:2,CreationTimestamp:2019-12-23 11:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8d222e4-2576-11ea-a994-fa163e34d433 0xc001a8fde0 0xc001a8fde1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 23 11:25:33.816: INFO: Pod "test-recreate-deployment-589c4bfd-svprv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-svprv,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-jbwmg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jbwmg/pods/test-recreate-deployment-589c4bfd-svprv,UID:ef0687ee-2576-11ea-a994-fa163e34d433,ResourceVersion:15782947,Generation:0,CreationTimestamp:2019-12-23 11:25:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd eefc620c-2576-11ea-a994-fa163e34d433 0xc001ac0b6f 0xc001ac0b80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-28qr5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-28qr5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-28qr5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ac0c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ac0c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:25:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:25:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:25:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 11:25:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-23 11:25:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:25:33.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-jbwmg" for this suite. 
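The object dumps above are what a Recreate rollout looks like from the API: the Deployment's Strategy.Type is Recreate, the old ReplicaSet (the redis template) is scaled to 0 first, and only then does the new ReplicaSet (the nginx template) start its pod, so old and new pods never run together. A minimal sketch of the spec shape involved, assuming the k8s.io/api and k8s.io/apimachinery modules; the names and labels echo the log, but this is an illustration, not the e2e framework's code.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			// Recreate scales the old ReplicaSet to zero before creating the new one.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
	fmt.Println(d.Name, d.Spec.Strategy.Type) // test-recreate-deployment Recreate
}
```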
Dec 23 11:25:40.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:25:40.934: INFO: namespace: e2e-tests-deployment-jbwmg, resource: bindings, ignored listing per whitelist Dec 23 11:25:40.946: INFO: namespace e2e-tests-deployment-jbwmg deletion completed in 7.10323817s • [SLOW TEST:18.151 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:25:40.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-f3ac4035-2576-11ea-a9d2-0242ac110005 STEP: Creating secret with name s-test-opt-upd-f3ac40fd-2576-11ea-a9d2-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f3ac4035-2576-11ea-a9d2-0242ac110005 STEP: Updating secret s-test-opt-upd-f3ac40fd-2576-11ea-a9d2-0242ac110005 STEP: Creating secret with name s-test-opt-create-f3ac411b-2576-11ea-a9d2-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:26:01.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mm7g4" for this suite. 
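The projected-secret case above mounts one projected volume whose secret sources are optional, then deletes one secret, updates another and creates a third, and waits for the files inside the volume to reflect those changes. A sketch of the volume shape that behaviour relies on, assuming the k8s.io/api module; the source names echo the generated names in the log but are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Optional sources keep the volume mountable even while a
					// referenced secret is missing or being swapped out.
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name, len(vol.VolumeSource.Projected.Sources), "optional secret sources")
}
```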
Dec 23 11:26:25.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:26:25.410: INFO: namespace: e2e-tests-projected-mm7g4, resource: bindings, ignored listing per whitelist Dec 23 11:26:25.486: INFO: namespace e2e-tests-projected-mm7g4 deletion completed in 24.332636739s • [SLOW TEST:44.540 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:26:25.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-0e3109f1-2577-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:26:25.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-zplm8" to be "success or failure" Dec 23 11:26:25.917: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.974932ms Dec 23 11:26:28.088: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186366127s Dec 23 11:26:30.117: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21577377s Dec 23 11:26:32.256: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354856417s Dec 23 11:26:34.276: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.37431346s Dec 23 11:26:36.315: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.413505788s STEP: Saw pod success Dec 23 11:26:36.315: INFO: Pod "pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:26:36.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 23 11:26:36.457: INFO: Waiting for pod pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:26:36.482: INFO: Pod pod-projected-configmaps-0e4095f2-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:26:36.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zplm8" for this suite. Dec 23 11:26:42.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:26:42.774: INFO: namespace: e2e-tests-projected-zplm8, resource: bindings, ignored listing per whitelist Dec 23 11:26:42.816: INFO: namespace e2e-tests-projected-zplm8 deletion completed in 6.313801113s • [SLOW TEST:17.329 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:26:42.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Dec 23 11:26:43.150: INFO: Waiting up to 5m0s for pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-var-expansion-88gdb" to be "success or failure" Dec 23 11:26:43.160: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.87955ms Dec 23 11:26:45.344: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193233982s Dec 23 11:26:47.366: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215385798s Dec 23 11:26:49.388: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237913546s Dec 23 11:26:51.406: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.255067668s Dec 23 11:26:53.419: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.268107789s STEP: Saw pod success Dec 23 11:26:53.419: INFO: Pod "var-expansion-18897b39-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:26:54.012: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-18897b39-2577-11ea-a9d2-0242ac110005 container dapi-container: STEP: delete the pod Dec 23 11:26:54.334: INFO: Waiting for pod var-expansion-18897b39-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:26:54.354: INFO: Pod var-expansion-18897b39-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:26:54.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-88gdb" for this suite. Dec 23 11:27:00.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:27:00.635: INFO: namespace: e2e-tests-var-expansion-88gdb, resource: bindings, ignored listing per whitelist Dec 23 11:27:00.645: INFO: namespace e2e-tests-var-expansion-88gdb deletion completed in 6.271258116s • [SLOW TEST:17.828 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:27:00.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2322a237-2577-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 23 11:27:00.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-59x2z" to be "success or failure" Dec 23 11:27:01.056: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 125.538598ms Dec 23 11:27:03.070: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138678542s Dec 23 11:27:05.099: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168219026s Dec 23 11:27:07.124: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.193356908s Dec 23 11:27:09.136: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204703614s Dec 23 11:27:11.399: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.468606508s Dec 23 11:27:13.498: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.567488318s STEP: Saw pod success Dec 23 11:27:13.499: INFO: Pod "pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:27:13.504: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 23 11:27:13.787: INFO: Waiting for pod pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:27:13.810: INFO: Pod pod-configmaps-23239512-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:27:13.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-59x2z" for this suite. Dec 23 11:27:19.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:27:19.938: INFO: namespace: e2e-tests-configmap-59x2z, resource: bindings, ignored listing per whitelist Dec 23 11:27:20.004: INFO: namespace e2e-tests-configmap-59x2z deletion completed in 6.185741743s • [SLOW TEST:19.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:27:20.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Dec 23 11:27:20.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 23 11:27:20.551: INFO: stderr: "" Dec 23 11:27:20.552: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:27:20.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-79pss" for this suite. Dec 23 11:27:26.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:27:26.776: INFO: namespace: e2e-tests-kubectl-79pss, resource: bindings, ignored listing per whitelist Dec 23 11:27:26.938: INFO: namespace e2e-tests-kubectl-79pss deletion completed in 6.369475506s • [SLOW TEST:6.935 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:27:26.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Dec 23 11:27:37.469: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:28:03.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-cd8vw" for this suite. Dec 23 11:28:09.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:28:09.613: INFO: namespace: e2e-tests-namespaces-cd8vw, resource: bindings, ignored listing per whitelist Dec 23 11:28:09.702: INFO: namespace e2e-tests-namespaces-cd8vw deletion completed in 6.23644876s STEP: Destroying namespace "e2e-tests-nsdeletetest-csdgf" for this suite. Dec 23 11:28:09.706: INFO: Namespace e2e-tests-nsdeletetest-csdgf was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-5g2fm" for this suite. 
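The Namespaces [Serial] case above reduces to a wait loop: create a pod in a throwaway namespace, delete the namespace, poll until the server reports it gone, then confirm no pods survived in a recreated namespace of the same name. A stdlib-only sketch of that loop; namespaceExists and countPods are hypothetical stand-ins for API lookups, not framework helpers, and the poll interval is only for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for lookups against the apiserver.
func namespaceExists(name string) bool { return false }
func countPods(namespace string) int   { return 0 }

// waitForNamespaceGone polls until the namespace disappears or the timeout hits.
func waitForNamespaceGone(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if !namespaceExists(name) {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return errors.New("timed out waiting for namespace " + name + " to be removed")
}

func main() {
	ns := "e2e-tests-nsdeletetest-csdgf"
	if err := waitForNamespaceGone(ns, 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("namespace %s is gone; %d pods left behind\n", ns, countPods(ns))
}
```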
Dec 23 11:28:15.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:28:15.901: INFO: namespace: e2e-tests-nsdeletetest-5g2fm, resource: bindings, ignored listing per whitelist Dec 23 11:28:15.916: INFO: namespace e2e-tests-nsdeletetest-5g2fm deletion completed in 6.209945777s • [SLOW TEST:48.977 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:28:15.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4ffb70cb-2577-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume secrets Dec 23 11:28:16.251: INFO: Waiting up to 5m0s for pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-xmr5n" to be "success or failure" Dec 23 11:28:16.296: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.991723ms Dec 23 11:28:18.805: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.553348366s Dec 23 11:28:20.875: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622811189s Dec 23 11:28:22.958: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.70671141s Dec 23 11:28:25.559: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.306896178s Dec 23 11:28:27.708: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.456473893s STEP: Saw pod success Dec 23 11:28:27.708: INFO: Pod "pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:28:27.718: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 23 11:28:28.074: INFO: Waiting for pod pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:28:28.093: INFO: Pod pod-secrets-4ffc6025-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:28:28.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xmr5n" for this suite. Dec 23 11:28:34.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:28:34.218: INFO: namespace: e2e-tests-secrets-xmr5n, resource: bindings, ignored listing per whitelist Dec 23 11:28:34.287: INFO: namespace e2e-tests-secrets-xmr5n deletion completed in 6.142220208s • [SLOW TEST:18.371 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:28:34.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005 Dec 23 11:28:34.628: INFO: Pod name my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005: Found 0 pods out of 1 Dec 23 11:28:39.645: INFO: Pod name my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005: Found 1 pods out of 1 Dec 23 11:28:39.645: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005" are running Dec 23 11:28:45.707: INFO: Pod "my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005-5sbdw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:28:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:28:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:28:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:28:34 +0000 UTC Reason: Message:}]) Dec 23 11:28:45.707: INFO: Trying to dial the pod Dec 23 11:28:50.924: INFO: Controller my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005: Got expected result from replica 1 [my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005-5sbdw]: "my-hostname-basic-5af82030-2577-11ea-a9d2-0242ac110005-5sbdw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:28:50.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-4hdxv" for this suite. Dec 23 11:28:59.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:28:59.721: INFO: namespace: e2e-tests-replication-controller-4hdxv, resource: bindings, ignored listing per whitelist Dec 23 11:28:59.772: INFO: namespace e2e-tests-replication-controller-4hdxv deletion completed in 8.83880152s • [SLOW TEST:25.484 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:28:59.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
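A DaemonSet roughly equivalent to the simple one created in the step above would look like the following; the name, label, and image are illustrative, not the ones used by the e2e suite:

# Minimal DaemonSet: one pod per schedulable node, selected by a label.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# One pod should become available per node, which is what the log below keeps polling for.
kubectl rollout status daemonset/daemon-set-demo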
Dec 23 11:29:00.532: INFO: Number of nodes with available pods: 0 Dec 23 11:29:00.533: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:01.549: INFO: Number of nodes with available pods: 0 Dec 23 11:29:01.549: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:02.657: INFO: Number of nodes with available pods: 0 Dec 23 11:29:02.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:03.558: INFO: Number of nodes with available pods: 0 Dec 23 11:29:03.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:04.562: INFO: Number of nodes with available pods: 0 Dec 23 11:29:04.562: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:05.694: INFO: Number of nodes with available pods: 0 Dec 23 11:29:05.694: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:07.134: INFO: Number of nodes with available pods: 0 Dec 23 11:29:07.134: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:07.561: INFO: Number of nodes with available pods: 0 Dec 23 11:29:07.561: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:08.607: INFO: Number of nodes with available pods: 0 Dec 23 11:29:08.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:09.558: INFO: Number of nodes with available pods: 0 Dec 23 11:29:09.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:10.577: INFO: Number of nodes with available pods: 1 Dec 23 11:29:10.577: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Dec 23 11:29:10.684: INFO: Number of nodes with available pods: 0 Dec 23 11:29:10.684: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:11.703: INFO: Number of nodes with available pods: 0 Dec 23 11:29:11.703: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:12.708: INFO: Number of nodes with available pods: 0 Dec 23 11:29:12.708: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:13.942: INFO: Number of nodes with available pods: 0 Dec 23 11:29:13.942: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:14.733: INFO: Number of nodes with available pods: 0 Dec 23 11:29:14.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:15.704: INFO: Number of nodes with available pods: 0 Dec 23 11:29:15.704: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:16.767: INFO: Number of nodes with available pods: 0 Dec 23 11:29:16.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:17.703: INFO: Number of nodes with available pods: 0 Dec 23 11:29:17.703: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:18.714: INFO: Number of nodes with available pods: 0 Dec 23 11:29:18.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:19.703: INFO: Number of nodes with available pods: 0 Dec 23 11:29:19.703: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:20.849: INFO: Number of nodes with available pods: 0 Dec 23 11:29:20.849: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:21.709: INFO: Number of nodes with available pods: 0 Dec 23 11:29:21.709: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:22.715: INFO: Number of nodes with available pods: 0 Dec 23 11:29:22.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:23.709: INFO: Number of nodes with available pods: 0 Dec 23 11:29:23.709: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:25.337: INFO: Number of nodes with available pods: 0 Dec 23 11:29:25.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:25.836: INFO: Number of nodes with available pods: 0 Dec 23 11:29:25.836: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:26.796: INFO: Number of nodes with available pods: 0 Dec 23 11:29:26.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:27.714: INFO: Number of nodes with available pods: 0 Dec 23 11:29:27.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 23 11:29:28.702: INFO: Number of nodes with available pods: 1 Dec 23 11:29:28.702: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-z7584, will wait for the garbage collector to delete the pods Dec 23 11:29:28.851: INFO: Deleting DaemonSet.extensions daemon-set took: 93.116713ms Dec 23 11:29:28.952: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.849637ms Dec 23 
11:29:42.832: INFO: Number of nodes with available pods: 0 Dec 23 11:29:42.832: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 11:29:42.849: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-z7584/daemonsets","resourceVersion":"15783533"},"items":null} Dec 23 11:29:42.855: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-z7584/pods","resourceVersion":"15783533"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:29:42.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-z7584" for this suite. Dec 23 11:29:48.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:29:49.013: INFO: namespace: e2e-tests-daemonsets-z7584, resource: bindings, ignored listing per whitelist Dec 23 11:29:49.040: INFO: namespace e2e-tests-daemonsets-z7584 deletion completed in 6.166334739s • [SLOW TEST:49.268 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:29:49.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:29:49.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-rzrjp" to be "success or failure" Dec 23 11:29:49.281: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.316428ms Dec 23 11:29:51.303: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031673526s Dec 23 11:29:53.320: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048617332s Dec 23 11:29:56.050: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778215386s Dec 23 11:29:58.066: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.79453398s Dec 23 11:30:00.082: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.810529811s STEP: Saw pod success Dec 23 11:30:00.082: INFO: Pod "downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:30:00.086: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:30:01.195: INFO: Waiting for pod downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:30:01.206: INFO: Pod downwardapi-volume-877a4a3e-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:30:01.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rzrjp" for this suite. Dec 23 11:30:07.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:30:07.470: INFO: namespace: e2e-tests-downward-api-rzrjp, resource: bindings, ignored listing per whitelist Dec 23 11:30:07.488: INFO: namespace e2e-tests-downward-api-rzrjp deletion completed in 6.273304626s • [SLOW TEST:18.448 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:30:07.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 23 11:30:07.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-tjq62' Dec 23 11:30:09.584: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 23 11:30:09.584: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Dec 23 11:30:13.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-tjq62' Dec 23 11:30:13.985: INFO: stderr: "" Dec 23 11:30:13.985: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:30:13.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tjq62" for this suite. Dec 23 11:30:20.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:30:20.165: INFO: namespace: e2e-tests-kubectl-tjq62, resource: bindings, ignored listing per whitelist Dec 23 11:30:20.181: INFO: namespace e2e-tests-kubectl-tjq62 deletion completed in 6.169020144s • [SLOW TEST:12.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:30:20.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:30:20.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-krrlv" to be "success or failure" Dec 23 11:30:20.326: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.364416ms Dec 23 11:30:22.340: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024851369s Dec 23 11:30:24.356: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.040390955s Dec 23 11:30:26.485: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170009684s Dec 23 11:30:28.519: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203979441s Dec 23 11:30:30.558: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.242666821s STEP: Saw pod success Dec 23 11:30:30.558: INFO: Pod "downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:30:30.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:30:30.923: INFO: Waiting for pod downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:30:30.935: INFO: Pod downwardapi-volume-99fc636f-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:30:30.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-krrlv" for this suite. Dec 23 11:30:37.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:30:37.130: INFO: namespace: e2e-tests-downward-api-krrlv, resource: bindings, ignored listing per whitelist Dec 23 11:30:37.146: INFO: namespace e2e-tests-downward-api-krrlv deletion completed in 6.200816196s • [SLOW TEST:16.965 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:30:37.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 23 11:30:37.354: INFO: Waiting up to 5m0s for pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-rlr7b" to be "success or failure" Dec 23 11:30:37.401: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.880729ms Dec 23 11:30:39.411: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056367567s Dec 23 11:30:41.433: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.078750059s Dec 23 11:30:43.447: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092601675s Dec 23 11:30:45.467: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112466903s Dec 23 11:30:47.480: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125394147s STEP: Saw pod success Dec 23 11:30:47.480: INFO: Pod "pod-a424ddb0-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:30:47.486: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a424ddb0-2577-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:30:48.087: INFO: Waiting for pod pod-a424ddb0-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:30:48.118: INFO: Pod pod-a424ddb0-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:30:48.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rlr7b" for this suite. Dec 23 11:30:54.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:30:54.333: INFO: namespace: e2e-tests-emptydir-rlr7b, resource: bindings, ignored listing per whitelist Dec 23 11:30:54.432: INFO: namespace e2e-tests-emptydir-rlr7b deletion completed in 6.262237327s • [SLOW TEST:17.286 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:30:54.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 23 11:30:54.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 23 11:30:54.935: INFO: stderr: "" Dec 23 11:30:54.935: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:30:54.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h8pmz" for this suite. Dec 23 11:31:00.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:31:01.073: INFO: namespace: e2e-tests-kubectl-h8pmz, resource: bindings, ignored listing per whitelist Dec 23 11:31:01.141: INFO: namespace e2e-tests-kubectl-h8pmz deletion completed in 6.187542461s • [SLOW TEST:6.708 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:31:01.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:32:01.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-x5p8t" for this suite. 
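The probe behaviour checked above (a readiness probe that always fails keeps the pod unready but, unlike a liveness probe, never restarts it) can be seen with a pod along these lines; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fails-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: pod stays Running but never becomes Ready
      periodSeconds: 5
EOF
# READY stays 0/1 and RESTARTS stays 0, since readiness failures do not restart containers.
kubectl get pod readiness-fails-demo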
Dec 23 11:32:25.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:32:25.460: INFO: namespace: e2e-tests-container-probe-x5p8t, resource: bindings, ignored listing per whitelist Dec 23 11:32:25.530: INFO: namespace e2e-tests-container-probe-x5p8t deletion completed in 24.221121065s • [SLOW TEST:84.389 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:32:25.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Dec 23 11:32:25.750: INFO: Waiting up to 5m0s for pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005" in namespace "e2e-tests-containers-4td2n" to be "success or failure" Dec 23 11:32:25.779: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.223732ms Dec 23 11:32:27.796: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045362313s Dec 23 11:32:29.816: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065243376s Dec 23 11:32:31.892: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141751289s Dec 23 11:32:34.227: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476276389s Dec 23 11:32:36.258: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.507810706s STEP: Saw pod success Dec 23 11:32:36.259: INFO: Pod "client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:32:36.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:32:36.643: INFO: Waiting for pod client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005 to disappear Dec 23 11:32:36.661: INFO: Pod client-containers-e4b17a21-2577-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:32:36.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4td2n" for this suite. Dec 23 11:32:42.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:32:42.935: INFO: namespace: e2e-tests-containers-4td2n, resource: bindings, ignored listing per whitelist Dec 23 11:32:43.034: INFO: namespace e2e-tests-containers-4td2n deletion completed in 6.343457831s • [SLOW TEST:17.504 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:32:43.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-xwvgr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-xwvgr to expose endpoints map[] Dec 23 11:32:43.385: INFO: Get endpoints failed (89.823256ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Dec 23 11:32:44.399: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-xwvgr exposes endpoints map[] (1.103757141s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-xwvgr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-xwvgr to expose endpoints map[pod1:[100]] Dec 23 11:32:49.067: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.618220041s elapsed, will retry) Dec 23 11:32:54.512: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-xwvgr exposes endpoints map[pod1:[100]] (10.063387775s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-xwvgr STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace e2e-tests-services-xwvgr to expose endpoints map[pod1:[100] pod2:[101]] Dec 23 11:32:59.910: INFO: Unexpected endpoints: found map[efe6866e-2577-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (5.3098967s elapsed, will retry) Dec 23 11:33:02.979: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-xwvgr exposes endpoints map[pod2:[101] pod1:[100]] (8.378299488s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-xwvgr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-xwvgr to expose endpoints map[pod2:[101]] Dec 23 11:33:04.059: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-xwvgr exposes endpoints map[pod2:[101]] (1.071499525s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-xwvgr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-xwvgr to expose endpoints map[] Dec 23 11:33:05.401: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-xwvgr exposes endpoints map[] (1.320776526s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:33:07.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-xwvgr" for this suite. Dec 23 11:33:31.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:33:31.641: INFO: namespace: e2e-tests-services-xwvgr, resource: bindings, ignored listing per whitelist Dec 23 11:33:31.751: INFO: namespace e2e-tests-services-xwvgr deletion completed in 24.362163309s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.714 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:33:31.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 23 11:33:32.013: INFO: Waiting up to 5m0s for pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-2bsjb" to be "success or failure" Dec 23 11:33:32.020: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704128ms Dec 23 11:33:34.040: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026471495s Dec 23 11:33:36.051: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037261913s Dec 23 11:33:38.151: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137569992s Dec 23 11:33:40.177: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163345492s Dec 23 11:33:42.206: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192804261s STEP: Saw pod success Dec 23 11:33:42.207: INFO: Pod "downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:33:42.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005 container dapi-container: STEP: delete the pod Dec 23 11:33:42.445: INFO: Waiting for pod downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005 to disappear Dec 23 11:33:42.462: INFO: Pod downward-api-0c3dd2ee-2578-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:33:42.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2bsjb" for this suite. Dec 23 11:33:48.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:33:48.863: INFO: namespace: e2e-tests-downward-api-2bsjb, resource: bindings, ignored listing per whitelist Dec 23 11:33:48.993: INFO: namespace e2e-tests-downward-api-2bsjb deletion completed in 6.51607148s • [SLOW TEST:17.242 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:33:48.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:33:59.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-j8dhc" for this suite. 
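Roughly, the Kubelet test above verifies that a one-shot busybox command's stdout is retrievable through the logs API; an illustrative equivalent (names are made up):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello from the busybox container"]
EOF
# Once the container has run, its stdout is available via the logs endpoint.
kubectl logs busybox-logs-demo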
Dec 23 11:34:45.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:34:45.529: INFO: namespace: e2e-tests-kubelet-test-j8dhc, resource: bindings, ignored listing per whitelist Dec 23 11:34:45.584: INFO: namespace e2e-tests-kubelet-test-j8dhc deletion completed in 46.208874237s • [SLOW TEST:56.590 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:34:45.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 23 11:34:58.658: INFO: Successfully updated pod "pod-update-3852539b-2578-11ea-a9d2-0242ac110005" STEP: verifying the updated pod is in kubernetes Dec 23 11:34:58.793: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:34:58.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qhs7x" for this suite. 
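The pod-update check above corresponds to an in-place mutation of an existing pod object, for example changing a label and reading it back; the pod name and label here are illustrative:

kubectl run pod-update-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
# Only certain pod fields are mutable after creation; labels are, so update one and read it back.
kubectl label pod pod-update-demo updated=true --overwrite
kubectl get pod pod-update-demo --show-labels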
Dec 23 11:35:22.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:35:23.002: INFO: namespace: e2e-tests-pods-qhs7x, resource: bindings, ignored listing per whitelist Dec 23 11:35:23.042: INFO: namespace e2e-tests-pods-qhs7x deletion completed in 24.240715644s • [SLOW TEST:37.457 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:35:23.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4e8ae567-2578-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume secrets Dec 23 11:35:23.248: INFO: Waiting up to 5m0s for pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-vn2pc" to be "success or failure" Dec 23 11:35:23.273: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.572432ms Dec 23 11:35:25.313: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064704547s Dec 23 11:35:27.328: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079696275s Dec 23 11:35:29.970: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.721099668s Dec 23 11:35:32.131: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.882848132s Dec 23 11:35:34.160: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.911145197s STEP: Saw pod success Dec 23 11:35:34.160: INFO: Pod "pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:35:34.196: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005 container secret-env-test: STEP: delete the pod Dec 23 11:35:34.393: INFO: Waiting for pod pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005 to disappear Dec 23 11:35:34.465: INFO: Pod pod-secrets-4e8c2f1d-2578-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:35:34.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vn2pc" for this suite. 
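Consuming a Secret through environment variables, as the test above does, looks roughly like this; the secret, key, and pod names are illustrative:

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs pod-secrets-env-demo   # prints SECRET_DATA=value-1 once the pod has completed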
Dec 23 11:35:40.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:35:40.829: INFO: namespace: e2e-tests-secrets-vn2pc, resource: bindings, ignored listing per whitelist Dec 23 11:35:40.847: INFO: namespace e2e-tests-secrets-vn2pc deletion completed in 6.367846792s • [SLOW TEST:17.805 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:35:40.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pfqlh Dec 23 11:35:51.458: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pfqlh STEP: checking the pod's current state and verifying that restartCount is present Dec 23 11:35:51.463: INFO: Initial restart count of pod liveness-http is 0 Dec 23 11:36:11.697: INFO: Restart count of pod e2e-tests-container-probe-pfqlh/liveness-http is now 1 (20.234532084s elapsed) Dec 23 11:36:32.280: INFO: Restart count of pod e2e-tests-container-probe-pfqlh/liveness-http is now 2 (40.816925262s elapsed) Dec 23 11:36:51.133: INFO: Restart count of pod e2e-tests-container-probe-pfqlh/liveness-http is now 3 (59.67071763s elapsed) Dec 23 11:37:11.376: INFO: Restart count of pod e2e-tests-container-probe-pfqlh/liveness-http is now 4 (1m19.913056341s elapsed) Dec 23 11:38:12.711: INFO: Restart count of pod e2e-tests-container-probe-pfqlh/liveness-http is now 5 (2m21.248706749s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:38:12.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-pfqlh" for this suite. 
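The liveness-http pod used above is, in essence, a pod whose HTTP liveness probe keeps failing, so the kubelet restarts it and restartCount climbs (with crash-loop backoff stretching the intervals, as in the elapsed times logged above). A comparable, purely illustrative spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /no-such-page      # always returns 404, so every probe fails
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# RESTARTS should only ever increase, one restart per round of failed probes.
kubectl get pod liveness-http-demo -w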
Dec 23 11:38:19.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:38:19.204: INFO: namespace: e2e-tests-container-probe-pfqlh, resource: bindings, ignored listing per whitelist Dec 23 11:38:19.270: INFO: namespace e2e-tests-container-probe-pfqlh deletion completed in 6.381873386s • [SLOW TEST:158.422 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:38:19.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 23 11:38:19.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-glvbs" to be "success or failure" Dec 23 11:38:19.571: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.851059ms Dec 23 11:38:21.852: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.345002323s Dec 23 11:38:23.899: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391625439s Dec 23 11:38:25.944: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437307832s Dec 23 11:38:28.253: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.745693091s Dec 23 11:38:30.280: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.773155842s STEP: Saw pod success Dec 23 11:38:30.280: INFO: Pod "downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:38:30.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005 container client-container: STEP: delete the pod Dec 23 11:38:30.684: INFO: Waiting for pod downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005 to disappear Dec 23 11:38:30.756: INFO: Pod downwardapi-volume-b79a6d14-2578-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:38:30.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-glvbs" for this suite. Dec 23 11:38:36.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:38:37.079: INFO: namespace: e2e-tests-projected-glvbs, resource: bindings, ignored listing per whitelist Dec 23 11:38:37.122: INFO: namespace e2e-tests-projected-glvbs deletion completed in 6.341877642s • [SLOW TEST:17.852 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:38:37.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 23 11:39:07.345: INFO: Container started at 2019-12-23 11:38:44 +0000 UTC, pod became ready at 2019-12-23 11:39:05 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:39:07.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ml62c" for this suite. 
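The readiness-probe test that just finished asserts two things: the pod does not report Ready until the probe's initial delay has passed (container started at 11:38:44, pod became ready at 11:39:05), and a failing readiness probe never restarts the container. A rough sketch of a container carrying such a probe follows, again with the k8s.io/api/core/v1 types; the probe command and delay values are assumptions for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readinessPod sketches a pod whose container only becomes Ready once its
// readiness probe starts passing, and the kubelet does not even run that probe
// until InitialDelaySeconds have elapsed. Unlike a liveness probe, a failing
// readiness probe only keeps the pod out of service endpoints; it never
// restarts the container, so restartCount is expected to stay at 0.
func readinessPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "readiness",
				Image: "k8s.gcr.io/test-webserver", // illustrative image
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer API versions
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					InitialDelaySeconds: 20,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() {
	pod := readinessPod("default")
	fmt.Printf("pod %s waits %ds before its first readiness check\n",
		pod.Name, pod.Spec.Containers[0].ReadinessProbe.InitialDelaySeconds)
}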
Dec 23 11:39:31.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:39:31.554: INFO: namespace: e2e-tests-container-probe-ml62c, resource: bindings, ignored listing per whitelist Dec 23 11:39:31.576: INFO: namespace e2e-tests-container-probe-ml62c deletion completed in 24.223111274s • [SLOW TEST:54.453 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:39:31.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Dec 23 11:39:31.802: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:39:54.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-tjf2r" for this suite. 
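The init-container test above ("should invoke init containers on a RestartAlways pod") relies on the kubelet running every entry in spec.initContainers to completion, in order, before any regular container starts. A hedged sketch of such a pod spec follows; the container names and the busybox/pause images are illustrative, not the exact objects the conformance test builds.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod sketches a RestartAlways pod with two init containers.
// The kubelet runs init1, then init2, each to successful completion, and only
// then starts the main container; that ordering is what the test observes via
// the pod's initContainerStatuses.
func initContainerPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause"}, // long-running main container
			},
		},
	}
}

func main() {
	pod := initContainerPod("default")
	fmt.Printf("pod %s runs %d init containers before %d app containers\n",
		pod.Name, len(pod.Spec.InitContainers), len(pod.Spec.Containers))
}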
Dec 23 11:40:20.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:40:20.837: INFO: namespace: e2e-tests-init-container-tjf2r, resource: bindings, ignored listing per whitelist Dec 23 11:40:20.965: INFO: namespace e2e-tests-init-container-tjf2r deletion completed in 26.304585182s • [SLOW TEST:49.388 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:40:20.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-001ac9e1-2579-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume secrets Dec 23 11:40:21.144: INFO: Waiting up to 5m0s for pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-m8m6r" to be "success or failure" Dec 23 11:40:21.151: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.846878ms Dec 23 11:40:23.324: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179841766s Dec 23 11:40:25.338: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194036291s Dec 23 11:40:27.892: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.747690231s Dec 23 11:40:29.907: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.76319726s Dec 23 11:40:31.942: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798295283s STEP: Saw pod success Dec 23 11:40:31.943: INFO: Pod "pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:40:31.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 23 11:40:32.029: INFO: Waiting for pod pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005 to disappear Dec 23 11:40:32.040: INFO: Pod pod-secrets-001bd2f4-2579-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:40:32.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-m8m6r" for this suite. 
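The secrets test that just completed creates a Secret and a pod that mounts it as a volume, then reads the key back from the container's filesystem (the "success or failure" condition in the log). A minimal sketch of that pairing with the k8s.io/api/core/v1 types is below; the secret name, key, command, and mount path are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretAndPod sketches a Secret plus a pod that consumes it through a volume:
// each key of the secret appears as a file under the mount path, which the test
// container reads and prints so the framework can verify the contents from its logs.
func secretAndPod(namespace string) (*corev1.Secret, *corev1.Pod) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: namespace},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	return secret, pod
}

func main() {
	secret, pod := secretAndPod("default")
	fmt.Printf("pod %s mounts secret %s at /etc/secret-volume\n", pod.Name, secret.Name)
}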
Dec 23 11:40:38.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:40:38.547: INFO: namespace: e2e-tests-secrets-m8m6r, resource: bindings, ignored listing per whitelist Dec 23 11:40:38.577: INFO: namespace e2e-tests-secrets-m8m6r deletion completed in 6.519186878s • [SLOW TEST:17.612 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:40:38.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Dec 23 11:40:38.972: INFO: Waiting up to 5m0s for pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-m2fvz" to be "success or failure" Dec 23 11:40:38.981: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.15646ms Dec 23 11:40:40.999: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027302375s Dec 23 11:40:43.219: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247414644s Dec 23 11:40:45.236: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263666115s Dec 23 11:40:47.256: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283893434s Dec 23 11:40:50.108: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.135763246s STEP: Saw pod success Dec 23 11:40:50.108: INFO: Pod "pod-0abb4a0d-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:40:50.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0abb4a0d-2579-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:40:50.475: INFO: Waiting for pod pod-0abb4a0d-2579-11ea-a9d2-0242ac110005 to disappear Dec 23 11:40:50.492: INFO: Pod pod-0abb4a0d-2579-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:40:50.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m2fvz" for this suite. 
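The emptyDir test above exercises a pod with an emptyDir volume on the default medium (node-local storage rather than tmpfs) and checks the mount point's file mode. A sketch of that volume wiring follows; the stat-based command and mount path are illustrative, and leaving Medium empty is what selects the default medium.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod sketches a pod with an emptyDir volume. An empty Medium means the
// default medium (backed by node storage); corev1.StorageMediumMemory would make
// it tmpfs instead, which is the variant the later "(root,0777,tmpfs)" case uses.
func emptyDirPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"}, // print the mount's mode
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() {
	pod := emptyDirPod("default")
	fmt.Printf("pod %s mounts an emptyDir at %s\n",
		pod.Name, pod.Spec.Containers[0].VolumeMounts[0].MountPath)
}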
Dec 23 11:40:56.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:40:56.652: INFO: namespace: e2e-tests-emptydir-m2fvz, resource: bindings, ignored listing per whitelist Dec 23 11:40:56.722: INFO: namespace e2e-tests-emptydir-m2fvz deletion completed in 6.210627226s • [SLOW TEST:18.144 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:40:56.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-157278d2-2579-11ea-a9d2-0242ac110005 STEP: Creating a pod to test consume secrets Dec 23 11:40:56.948: INFO: Waiting up to 5m0s for pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-795m5" to be "success or failure" Dec 23 11:40:56.963: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.210205ms Dec 23 11:40:59.007: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058623607s Dec 23 11:41:01.038: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089658863s Dec 23 11:41:03.073: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124705576s Dec 23 11:41:05.084: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135822145s Dec 23 11:41:07.097: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.148850371s STEP: Saw pod success Dec 23 11:41:07.097: INFO: Pod "pod-secrets-15737322-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:41:07.102: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-15737322-2579-11ea-a9d2-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 23 11:41:07.777: INFO: Waiting for pod pod-secrets-15737322-2579-11ea-a9d2-0242ac110005 to disappear Dec 23 11:41:07.936: INFO: Pod pod-secrets-15737322-2579-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:41:07.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-795m5" for this suite. Dec 23 11:41:16.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:41:16.548: INFO: namespace: e2e-tests-secrets-795m5, resource: bindings, ignored listing per whitelist Dec 23 11:41:16.637: INFO: namespace e2e-tests-secrets-795m5 deletion completed in 8.307304219s • [SLOW TEST:19.915 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:41:16.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-sd2cw I1223 11:41:16.815524 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-sd2cw, replica count: 1 I1223 11:41:17.867232 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:18.868949 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:19.869937 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:20.871074 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:21.872077 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:22.872621 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 
11:41:23.873879 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:24.874803 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:25.875801 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:26.877421 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:27.878716 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 11:41:28.879552 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 23 11:41:29.117: INFO: Created: latency-svc-b8r8q Dec 23 11:41:29.146: INFO: Got endpoints: latency-svc-b8r8q [166.354839ms] Dec 23 11:41:29.208: INFO: Created: latency-svc-2cpsd Dec 23 11:41:29.256: INFO: Got endpoints: latency-svc-2cpsd [109.284338ms] Dec 23 11:41:29.307: INFO: Created: latency-svc-mfc59 Dec 23 11:41:29.325: INFO: Got endpoints: latency-svc-mfc59 [177.775453ms] Dec 23 11:41:29.485: INFO: Created: latency-svc-j7j56 Dec 23 11:41:29.505: INFO: Got endpoints: latency-svc-j7j56 [357.405679ms] Dec 23 11:41:29.662: INFO: Created: latency-svc-x7fhw Dec 23 11:41:29.682: INFO: Got endpoints: latency-svc-x7fhw [535.436944ms] Dec 23 11:41:29.884: INFO: Created: latency-svc-clxgm Dec 23 11:41:29.963: INFO: Got endpoints: latency-svc-clxgm [814.520074ms] Dec 23 11:41:30.137: INFO: Created: latency-svc-w2vkw Dec 23 11:41:30.152: INFO: Got endpoints: latency-svc-w2vkw [1.005154964s] Dec 23 11:41:30.412: INFO: Created: latency-svc-8l5dx Dec 23 11:41:30.437: INFO: Got endpoints: latency-svc-8l5dx [1.28975565s] Dec 23 11:41:30.641: INFO: Created: latency-svc-tr57j Dec 23 11:41:30.655: INFO: Got endpoints: latency-svc-tr57j [1.507249106s] Dec 23 11:41:30.699: INFO: Created: latency-svc-x8kkx Dec 23 11:41:30.899: INFO: Got endpoints: latency-svc-x8kkx [1.751396319s] Dec 23 11:41:30.931: INFO: Created: latency-svc-r8slt Dec 23 11:41:30.949: INFO: Got endpoints: latency-svc-r8slt [1.801966425s] Dec 23 11:41:30.990: INFO: Created: latency-svc-62gg5 Dec 23 11:41:31.109: INFO: Got endpoints: latency-svc-62gg5 [1.961324633s] Dec 23 11:41:31.132: INFO: Created: latency-svc-g64ht Dec 23 11:41:31.152: INFO: Got endpoints: latency-svc-g64ht [2.004045476s] Dec 23 11:41:31.320: INFO: Created: latency-svc-j4ml6 Dec 23 11:41:31.336: INFO: Got endpoints: latency-svc-j4ml6 [2.187936132s] Dec 23 11:41:31.649: INFO: Created: latency-svc-j9qw2 Dec 23 11:41:31.695: INFO: Got endpoints: latency-svc-j9qw2 [2.547022877s] Dec 23 11:41:31.991: INFO: Created: latency-svc-5gjsd Dec 23 11:41:32.021: INFO: Got endpoints: latency-svc-5gjsd [2.873016166s] Dec 23 11:41:32.422: INFO: Created: latency-svc-kxr6v Dec 23 11:41:32.423: INFO: Got endpoints: latency-svc-kxr6v [3.166562541s] Dec 23 11:41:32.682: INFO: Created: latency-svc-wfkgc Dec 23 11:41:32.753: INFO: Got endpoints: latency-svc-wfkgc [3.427285603s] Dec 23 11:41:33.040: INFO: Created: latency-svc-kxdtd Dec 23 11:41:33.067: INFO: Got endpoints: latency-svc-kxdtd [3.562542251s] Dec 23 11:41:33.226: INFO: Created: latency-svc-hd2gs Dec 23 
11:41:33.300: INFO: Got endpoints: latency-svc-hd2gs [3.61745306s] Dec 23 11:41:33.432: INFO: Created: latency-svc-4m5rl Dec 23 11:41:33.445: INFO: Got endpoints: latency-svc-4m5rl [3.482232252s] Dec 23 11:41:33.501: INFO: Created: latency-svc-lq89h Dec 23 11:41:33.618: INFO: Got endpoints: latency-svc-lq89h [3.466311939s] Dec 23 11:41:33.646: INFO: Created: latency-svc-9r7kf Dec 23 11:41:33.670: INFO: Got endpoints: latency-svc-9r7kf [3.232147745s] Dec 23 11:41:33.888: INFO: Created: latency-svc-zx7t9 Dec 23 11:41:33.932: INFO: Got endpoints: latency-svc-zx7t9 [3.27656636s] Dec 23 11:41:34.107: INFO: Created: latency-svc-2g5xb Dec 23 11:41:34.140: INFO: Got endpoints: latency-svc-2g5xb [3.240194748s] Dec 23 11:41:34.292: INFO: Created: latency-svc-xfnpg Dec 23 11:41:34.312: INFO: Got endpoints: latency-svc-xfnpg [3.362235094s] Dec 23 11:41:34.603: INFO: Created: latency-svc-9lbtj Dec 23 11:41:35.066: INFO: Got endpoints: latency-svc-9lbtj [3.95639653s] Dec 23 11:41:35.117: INFO: Created: latency-svc-wpt6g Dec 23 11:41:35.346: INFO: Got endpoints: latency-svc-wpt6g [4.193533807s] Dec 23 11:41:35.376: INFO: Created: latency-svc-8m2ss Dec 23 11:41:35.523: INFO: Got endpoints: latency-svc-8m2ss [4.187086703s] Dec 23 11:41:35.565: INFO: Created: latency-svc-hxqzp Dec 23 11:41:35.573: INFO: Got endpoints: latency-svc-hxqzp [3.877863571s] Dec 23 11:41:35.689: INFO: Created: latency-svc-59gxm Dec 23 11:41:35.715: INFO: Got endpoints: latency-svc-59gxm [3.693265202s] Dec 23 11:41:35.761: INFO: Created: latency-svc-8mrsw Dec 23 11:41:35.861: INFO: Got endpoints: latency-svc-8mrsw [3.438649557s] Dec 23 11:41:35.945: INFO: Created: latency-svc-5nrw8 Dec 23 11:41:36.089: INFO: Got endpoints: latency-svc-5nrw8 [3.336155796s] Dec 23 11:41:36.109: INFO: Created: latency-svc-trtsq Dec 23 11:41:36.355: INFO: Got endpoints: latency-svc-trtsq [3.287739359s] Dec 23 11:41:36.382: INFO: Created: latency-svc-kzz8p Dec 23 11:41:36.428: INFO: Got endpoints: latency-svc-kzz8p [3.127557703s] Dec 23 11:41:36.654: INFO: Created: latency-svc-qxqrq Dec 23 11:41:36.679: INFO: Got endpoints: latency-svc-qxqrq [3.233886819s] Dec 23 11:41:36.744: INFO: Created: latency-svc-429j2 Dec 23 11:41:36.877: INFO: Got endpoints: latency-svc-429j2 [3.258285228s] Dec 23 11:41:36.945: INFO: Created: latency-svc-rzpgh Dec 23 11:41:36.958: INFO: Got endpoints: latency-svc-rzpgh [3.287898616s] Dec 23 11:41:37.099: INFO: Created: latency-svc-khvt6 Dec 23 11:41:37.134: INFO: Got endpoints: latency-svc-khvt6 [3.201392407s] Dec 23 11:41:37.255: INFO: Created: latency-svc-qm5b2 Dec 23 11:41:37.278: INFO: Got endpoints: latency-svc-qm5b2 [3.137781829s] Dec 23 11:41:37.333: INFO: Created: latency-svc-hvq4n Dec 23 11:41:37.531: INFO: Got endpoints: latency-svc-hvq4n [3.219134783s] Dec 23 11:41:37.571: INFO: Created: latency-svc-2nkd8 Dec 23 11:41:37.593: INFO: Got endpoints: latency-svc-2nkd8 [2.526531139s] Dec 23 11:41:37.971: INFO: Created: latency-svc-q2sgg Dec 23 11:41:37.982: INFO: Got endpoints: latency-svc-q2sgg [2.636030528s] Dec 23 11:41:38.204: INFO: Created: latency-svc-fsjtc Dec 23 11:41:38.255: INFO: Got endpoints: latency-svc-fsjtc [2.731233784s] Dec 23 11:41:38.316: INFO: Created: latency-svc-s5426 Dec 23 11:41:38.466: INFO: Got endpoints: latency-svc-s5426 [2.892137901s] Dec 23 11:41:38.532: INFO: Created: latency-svc-xrhlk Dec 23 11:41:38.686: INFO: Got endpoints: latency-svc-xrhlk [2.9708725s] Dec 23 11:41:38.709: INFO: Created: latency-svc-jgf79 Dec 23 11:41:38.721: INFO: Got endpoints: latency-svc-jgf79 [2.858940254s] Dec 23 
11:41:38.942: INFO: Created: latency-svc-hlgz9 Dec 23 11:41:39.013: INFO: Got endpoints: latency-svc-hlgz9 [2.923415764s] Dec 23 11:41:39.029: INFO: Created: latency-svc-qrqsq Dec 23 11:41:39.127: INFO: Got endpoints: latency-svc-qrqsq [2.771024306s] Dec 23 11:41:39.150: INFO: Created: latency-svc-sbfd2 Dec 23 11:41:39.171: INFO: Got endpoints: latency-svc-sbfd2 [2.742769159s] Dec 23 11:41:39.235: INFO: Created: latency-svc-9swql Dec 23 11:41:39.346: INFO: Got endpoints: latency-svc-9swql [2.666922662s] Dec 23 11:41:39.387: INFO: Created: latency-svc-pc24l Dec 23 11:41:39.402: INFO: Got endpoints: latency-svc-pc24l [2.524232103s] Dec 23 11:41:39.838: INFO: Created: latency-svc-wnz7q Dec 23 11:41:39.896: INFO: Got endpoints: latency-svc-wnz7q [2.937515645s] Dec 23 11:41:40.799: INFO: Created: latency-svc-l59kz Dec 23 11:41:40.805: INFO: Got endpoints: latency-svc-l59kz [3.670982732s] Dec 23 11:41:40.975: INFO: Created: latency-svc-2b478 Dec 23 11:41:40.992: INFO: Got endpoints: latency-svc-2b478 [3.713839509s] Dec 23 11:41:41.109: INFO: Created: latency-svc-7fnlm Dec 23 11:41:41.144: INFO: Got endpoints: latency-svc-7fnlm [3.612361725s] Dec 23 11:41:41.334: INFO: Created: latency-svc-2psct Dec 23 11:41:41.355: INFO: Got endpoints: latency-svc-2psct [3.762193465s] Dec 23 11:41:41.752: INFO: Created: latency-svc-tp6vn Dec 23 11:41:41.773: INFO: Got endpoints: latency-svc-tp6vn [3.790754541s] Dec 23 11:41:42.309: INFO: Created: latency-svc-66xcp Dec 23 11:41:42.409: INFO: Got endpoints: latency-svc-66xcp [4.154155672s] Dec 23 11:41:42.458: INFO: Created: latency-svc-ltbm9 Dec 23 11:41:42.497: INFO: Got endpoints: latency-svc-ltbm9 [4.030950826s] Dec 23 11:41:42.699: INFO: Created: latency-svc-84mjv Dec 23 11:41:42.758: INFO: Got endpoints: latency-svc-84mjv [4.071998041s] Dec 23 11:41:42.945: INFO: Created: latency-svc-4l2s8 Dec 23 11:41:43.069: INFO: Got endpoints: latency-svc-4l2s8 [4.347671123s] Dec 23 11:41:43.118: INFO: Created: latency-svc-jk8lj Dec 23 11:41:43.153: INFO: Got endpoints: latency-svc-jk8lj [4.139583759s] Dec 23 11:41:43.340: INFO: Created: latency-svc-h6gmr Dec 23 11:41:43.427: INFO: Got endpoints: latency-svc-h6gmr [4.300041339s] Dec 23 11:41:43.439: INFO: Created: latency-svc-flr59 Dec 23 11:41:43.454: INFO: Got endpoints: latency-svc-flr59 [4.282672922s] Dec 23 11:41:43.513: INFO: Created: latency-svc-lpplm Dec 23 11:41:43.617: INFO: Got endpoints: latency-svc-lpplm [4.26975188s] Dec 23 11:41:43.692: INFO: Created: latency-svc-g45sm Dec 23 11:41:43.845: INFO: Got endpoints: latency-svc-g45sm [4.442491091s] Dec 23 11:41:43.894: INFO: Created: latency-svc-v9g7h Dec 23 11:41:43.920: INFO: Got endpoints: latency-svc-v9g7h [4.023579101s] Dec 23 11:41:44.075: INFO: Created: latency-svc-nxmh4 Dec 23 11:41:44.121: INFO: Got endpoints: latency-svc-nxmh4 [3.315307876s] Dec 23 11:41:44.353: INFO: Created: latency-svc-5c77f Dec 23 11:41:44.380: INFO: Got endpoints: latency-svc-5c77f [3.387869554s] Dec 23 11:41:44.521: INFO: Created: latency-svc-nfwsc Dec 23 11:41:44.598: INFO: Got endpoints: latency-svc-nfwsc [3.454055164s] Dec 23 11:41:44.613: INFO: Created: latency-svc-l5hfz Dec 23 11:41:44.673: INFO: Got endpoints: latency-svc-l5hfz [3.317742821s] Dec 23 11:41:44.864: INFO: Created: latency-svc-dz684 Dec 23 11:41:44.872: INFO: Got endpoints: latency-svc-dz684 [3.097998097s] Dec 23 11:41:44.967: INFO: Created: latency-svc-tkdnh Dec 23 11:41:45.060: INFO: Got endpoints: latency-svc-tkdnh [2.649363475s] Dec 23 11:41:45.067: INFO: Created: latency-svc-x928j Dec 23 11:41:45.076: 
INFO: Got endpoints: latency-svc-x928j [2.57806436s] Dec 23 11:41:45.150: INFO: Created: latency-svc-mgp7d Dec 23 11:41:45.210: INFO: Got endpoints: latency-svc-mgp7d [2.451646118s] Dec 23 11:41:45.241: INFO: Created: latency-svc-z4cs5 Dec 23 11:41:45.243: INFO: Got endpoints: latency-svc-z4cs5 [2.173461885s] Dec 23 11:41:45.306: INFO: Created: latency-svc-f9b2l Dec 23 11:41:45.357: INFO: Got endpoints: latency-svc-f9b2l [2.20370753s] Dec 23 11:41:45.388: INFO: Created: latency-svc-2hgkb Dec 23 11:41:45.543: INFO: Got endpoints: latency-svc-2hgkb [2.114828579s] Dec 23 11:41:45.565: INFO: Created: latency-svc-m66ks Dec 23 11:41:45.594: INFO: Got endpoints: latency-svc-m66ks [2.139743116s] Dec 23 11:41:45.737: INFO: Created: latency-svc-rxzgw Dec 23 11:41:45.752: INFO: Got endpoints: latency-svc-rxzgw [2.134954445s] Dec 23 11:41:46.011: INFO: Created: latency-svc-dvmsd Dec 23 11:41:46.011: INFO: Got endpoints: latency-svc-dvmsd [2.165792199s] Dec 23 11:41:46.213: INFO: Created: latency-svc-zqglp Dec 23 11:41:46.245: INFO: Got endpoints: latency-svc-zqglp [2.324450239s] Dec 23 11:41:46.377: INFO: Created: latency-svc-7t2nn Dec 23 11:41:46.406: INFO: Got endpoints: latency-svc-7t2nn [2.285198398s] Dec 23 11:41:46.453: INFO: Created: latency-svc-dj58f Dec 23 11:41:46.536: INFO: Got endpoints: latency-svc-dj58f [2.155788253s] Dec 23 11:41:46.632: INFO: Created: latency-svc-m96j4 Dec 23 11:41:46.777: INFO: Got endpoints: latency-svc-m96j4 [2.178104853s] Dec 23 11:41:46.869: INFO: Created: latency-svc-4bg92 Dec 23 11:41:46.963: INFO: Got endpoints: latency-svc-4bg92 [2.289760116s] Dec 23 11:41:46.992: INFO: Created: latency-svc-jj7nr Dec 23 11:41:47.006: INFO: Got endpoints: latency-svc-jj7nr [2.133519568s] Dec 23 11:41:47.151: INFO: Created: latency-svc-gfbvx Dec 23 11:41:47.159: INFO: Got endpoints: latency-svc-gfbvx [2.099464229s] Dec 23 11:41:47.230: INFO: Created: latency-svc-hrfms Dec 23 11:41:47.314: INFO: Got endpoints: latency-svc-hrfms [2.238303094s] Dec 23 11:41:47.339: INFO: Created: latency-svc-24ggv Dec 23 11:41:47.340: INFO: Got endpoints: latency-svc-24ggv [2.129535684s] Dec 23 11:41:47.491: INFO: Created: latency-svc-x266q Dec 23 11:41:47.654: INFO: Got endpoints: latency-svc-x266q [2.410765478s] Dec 23 11:41:47.672: INFO: Created: latency-svc-tk82j Dec 23 11:41:47.683: INFO: Got endpoints: latency-svc-tk82j [2.32564308s] Dec 23 11:41:47.742: INFO: Created: latency-svc-8n5gj Dec 23 11:41:47.840: INFO: Got endpoints: latency-svc-8n5gj [2.296823255s] Dec 23 11:41:47.876: INFO: Created: latency-svc-9sgsg Dec 23 11:41:47.919: INFO: Got endpoints: latency-svc-9sgsg [2.324557734s] Dec 23 11:41:47.935: INFO: Created: latency-svc-8nbmx Dec 23 11:41:48.028: INFO: Got endpoints: latency-svc-8nbmx [2.27565208s] Dec 23 11:41:48.065: INFO: Created: latency-svc-s5mxk Dec 23 11:41:48.081: INFO: Got endpoints: latency-svc-s5mxk [2.069223021s] Dec 23 11:41:48.406: INFO: Created: latency-svc-httsx Dec 23 11:41:48.642: INFO: Got endpoints: latency-svc-httsx [2.396048274s] Dec 23 11:41:48.914: INFO: Created: latency-svc-6g4sg Dec 23 11:41:49.179: INFO: Got endpoints: latency-svc-6g4sg [2.772109805s] Dec 23 11:41:49.302: INFO: Created: latency-svc-2fsq8 Dec 23 11:41:49.384: INFO: Created: latency-svc-nb58d Dec 23 11:41:49.420: INFO: Got endpoints: latency-svc-2fsq8 [2.883451547s] Dec 23 11:41:49.504: INFO: Got endpoints: latency-svc-nb58d [2.726392547s] Dec 23 11:41:49.527: INFO: Created: latency-svc-5x4pd Dec 23 11:41:49.544: INFO: Got endpoints: latency-svc-5x4pd [2.579895757s] Dec 23 11:41:49.728: 
INFO: Created: latency-svc-ff4zr Dec 23 11:41:49.752: INFO: Got endpoints: latency-svc-ff4zr [2.745821388s] Dec 23 11:41:49.973: INFO: Created: latency-svc-qt7tw Dec 23 11:41:50.106: INFO: Got endpoints: latency-svc-qt7tw [2.946170682s] Dec 23 11:41:50.122: INFO: Created: latency-svc-zn4bn Dec 23 11:41:50.160: INFO: Got endpoints: latency-svc-zn4bn [2.844807671s] Dec 23 11:41:50.267: INFO: Created: latency-svc-lgt8r Dec 23 11:41:50.284: INFO: Got endpoints: latency-svc-lgt8r [2.943350407s] Dec 23 11:41:50.399: INFO: Created: latency-svc-89pvw Dec 23 11:41:50.420: INFO: Got endpoints: latency-svc-89pvw [2.765198104s] Dec 23 11:41:50.478: INFO: Created: latency-svc-vh29p Dec 23 11:41:50.592: INFO: Got endpoints: latency-svc-vh29p [2.907707364s] Dec 23 11:41:50.645: INFO: Created: latency-svc-75vl5 Dec 23 11:41:50.666: INFO: Got endpoints: latency-svc-75vl5 [2.825774055s] Dec 23 11:41:50.885: INFO: Created: latency-svc-b8clb Dec 23 11:41:51.029: INFO: Got endpoints: latency-svc-b8clb [3.109114045s] Dec 23 11:41:51.086: INFO: Created: latency-svc-hhp59 Dec 23 11:41:51.093: INFO: Got endpoints: latency-svc-hhp59 [3.064351654s] Dec 23 11:41:51.350: INFO: Created: latency-svc-5cmf8 Dec 23 11:41:51.389: INFO: Got endpoints: latency-svc-5cmf8 [3.308075335s] Dec 23 11:41:51.518: INFO: Created: latency-svc-c2kwk Dec 23 11:41:51.542: INFO: Got endpoints: latency-svc-c2kwk [2.899628907s] Dec 23 11:41:51.620: INFO: Created: latency-svc-jf26m Dec 23 11:41:51.738: INFO: Created: latency-svc-nhfn8 Dec 23 11:41:51.740: INFO: Got endpoints: latency-svc-jf26m [2.561461892s] Dec 23 11:41:51.748: INFO: Got endpoints: latency-svc-nhfn8 [2.32762545s] Dec 23 11:41:51.933: INFO: Created: latency-svc-p259k Dec 23 11:41:51.956: INFO: Got endpoints: latency-svc-p259k [2.451989841s] Dec 23 11:41:52.086: INFO: Created: latency-svc-rjftg Dec 23 11:41:52.126: INFO: Got endpoints: latency-svc-rjftg [2.582126799s] Dec 23 11:41:52.342: INFO: Created: latency-svc-jd2sd Dec 23 11:41:52.367: INFO: Got endpoints: latency-svc-jd2sd [2.614802136s] Dec 23 11:41:52.518: INFO: Created: latency-svc-xwgdh Dec 23 11:41:52.537: INFO: Got endpoints: latency-svc-xwgdh [2.430675249s] Dec 23 11:41:52.690: INFO: Created: latency-svc-rllzk Dec 23 11:41:52.715: INFO: Got endpoints: latency-svc-rllzk [2.555058787s] Dec 23 11:41:52.872: INFO: Created: latency-svc-bn75z Dec 23 11:41:52.897: INFO: Got endpoints: latency-svc-bn75z [2.612343187s] Dec 23 11:41:52.975: INFO: Created: latency-svc-8vxh2 Dec 23 11:41:53.127: INFO: Got endpoints: latency-svc-8vxh2 [2.706046583s] Dec 23 11:41:53.155: INFO: Created: latency-svc-w9dtv Dec 23 11:41:53.185: INFO: Got endpoints: latency-svc-w9dtv [2.592288753s] Dec 23 11:41:53.386: INFO: Created: latency-svc-6hlm6 Dec 23 11:41:53.412: INFO: Got endpoints: latency-svc-6hlm6 [2.745141525s] Dec 23 11:41:53.613: INFO: Created: latency-svc-gpwg6 Dec 23 11:41:53.641: INFO: Got endpoints: latency-svc-gpwg6 [2.611383951s] Dec 23 11:41:53.762: INFO: Created: latency-svc-kkx7n Dec 23 11:41:53.782: INFO: Got endpoints: latency-svc-kkx7n [2.689153405s] Dec 23 11:41:53.951: INFO: Created: latency-svc-8xlp7 Dec 23 11:41:53.970: INFO: Got endpoints: latency-svc-8xlp7 [2.58052451s] Dec 23 11:41:54.047: INFO: Created: latency-svc-sjhc2 Dec 23 11:41:54.172: INFO: Got endpoints: latency-svc-sjhc2 [2.6294871s] Dec 23 11:41:54.256: INFO: Created: latency-svc-jtwfp Dec 23 11:41:54.358: INFO: Got endpoints: latency-svc-jtwfp [2.616992905s] Dec 23 11:41:54.390: INFO: Created: latency-svc-4c9j5 Dec 23 11:41:54.404: INFO: Got endpoints: 
latency-svc-4c9j5 [2.656046906s] Dec 23 11:41:54.530: INFO: Created: latency-svc-ws44t Dec 23 11:41:54.557: INFO: Got endpoints: latency-svc-ws44t [2.601009427s] Dec 23 11:41:54.643: INFO: Created: latency-svc-m7tch Dec 23 11:41:54.724: INFO: Got endpoints: latency-svc-m7tch [2.597209168s] Dec 23 11:41:54.798: INFO: Created: latency-svc-r6gv5 Dec 23 11:41:54.892: INFO: Got endpoints: latency-svc-r6gv5 [2.52460671s] Dec 23 11:41:54.909: INFO: Created: latency-svc-588cl Dec 23 11:41:54.927: INFO: Got endpoints: latency-svc-588cl [2.389017447s] Dec 23 11:41:54.986: INFO: Created: latency-svc-2dtcz Dec 23 11:41:55.002: INFO: Got endpoints: latency-svc-2dtcz [2.286117693s] Dec 23 11:41:55.133: INFO: Created: latency-svc-4hlwb Dec 23 11:41:55.142: INFO: Got endpoints: latency-svc-4hlwb [2.245003561s] Dec 23 11:41:55.427: INFO: Created: latency-svc-zbskg Dec 23 11:41:55.451: INFO: Got endpoints: latency-svc-zbskg [2.323470604s] Dec 23 11:41:55.652: INFO: Created: latency-svc-hkhw6 Dec 23 11:41:55.672: INFO: Got endpoints: latency-svc-hkhw6 [2.486606936s] Dec 23 11:41:55.873: INFO: Created: latency-svc-pc2rr Dec 23 11:41:55.896: INFO: Got endpoints: latency-svc-pc2rr [2.483703497s] Dec 23 11:41:57.069: INFO: Created: latency-svc-7tbtf Dec 23 11:41:57.162: INFO: Got endpoints: latency-svc-7tbtf [3.520617363s] Dec 23 11:41:57.272: INFO: Created: latency-svc-zk8wt Dec 23 11:41:57.296: INFO: Got endpoints: latency-svc-zk8wt [3.51257644s] Dec 23 11:41:57.461: INFO: Created: latency-svc-cfpzq Dec 23 11:41:57.478: INFO: Got endpoints: latency-svc-cfpzq [3.507605559s] Dec 23 11:41:57.558: INFO: Created: latency-svc-jckwk Dec 23 11:41:57.692: INFO: Got endpoints: latency-svc-jckwk [3.519734368s] Dec 23 11:41:57.721: INFO: Created: latency-svc-hhnjp Dec 23 11:41:57.721: INFO: Got endpoints: latency-svc-hhnjp [3.363046974s] Dec 23 11:41:57.799: INFO: Created: latency-svc-vv5hs Dec 23 11:41:57.900: INFO: Got endpoints: latency-svc-vv5hs [3.495155618s] Dec 23 11:41:57.970: INFO: Created: latency-svc-chfnp Dec 23 11:41:58.074: INFO: Got endpoints: latency-svc-chfnp [3.516668569s] Dec 23 11:41:58.118: INFO: Created: latency-svc-9pjjd Dec 23 11:41:58.144: INFO: Got endpoints: latency-svc-9pjjd [3.418946282s] Dec 23 11:41:58.321: INFO: Created: latency-svc-hchjk Dec 23 11:41:58.332: INFO: Got endpoints: latency-svc-hchjk [3.439907843s] Dec 23 11:41:58.382: INFO: Created: latency-svc-sr4df Dec 23 11:41:58.508: INFO: Got endpoints: latency-svc-sr4df [3.580521174s] Dec 23 11:41:58.549: INFO: Created: latency-svc-xhtwp Dec 23 11:41:58.732: INFO: Got endpoints: latency-svc-xhtwp [3.729092228s] Dec 23 11:41:58.752: INFO: Created: latency-svc-dqzcl Dec 23 11:41:59.021: INFO: Got endpoints: latency-svc-dqzcl [3.878272231s] Dec 23 11:41:59.034: INFO: Created: latency-svc-k6gct Dec 23 11:41:59.044: INFO: Got endpoints: latency-svc-k6gct [3.591796774s] Dec 23 11:41:59.185: INFO: Created: latency-svc-cv7nb Dec 23 11:41:59.198: INFO: Got endpoints: latency-svc-cv7nb [3.524842603s] Dec 23 11:41:59.271: INFO: Created: latency-svc-k5m4k Dec 23 11:41:59.357: INFO: Got endpoints: latency-svc-k5m4k [3.460091549s] Dec 23 11:41:59.386: INFO: Created: latency-svc-87x8t Dec 23 11:41:59.394: INFO: Got endpoints: latency-svc-87x8t [2.232213324s] Dec 23 11:41:59.529: INFO: Created: latency-svc-hvjps Dec 23 11:41:59.556: INFO: Got endpoints: latency-svc-hvjps [2.259990676s] Dec 23 11:41:59.623: INFO: Created: latency-svc-cz6jj Dec 23 11:41:59.715: INFO: Got endpoints: latency-svc-cz6jj [2.236648559s] Dec 23 11:41:59.748: INFO: Created: 
latency-svc-tn4lg Dec 23 11:41:59.752: INFO: Got endpoints: latency-svc-tn4lg [2.059874534s] Dec 23 11:41:59.813: INFO: Created: latency-svc-lnbmg Dec 23 11:41:59.914: INFO: Got endpoints: latency-svc-lnbmg [2.193073744s] Dec 23 11:41:59.955: INFO: Created: latency-svc-zkqwh Dec 23 11:41:59.980: INFO: Got endpoints: latency-svc-zkqwh [2.080493854s] Dec 23 11:42:00.140: INFO: Created: latency-svc-ld7mn Dec 23 11:42:00.198: INFO: Got endpoints: latency-svc-ld7mn [2.122882504s] Dec 23 11:42:00.224: INFO: Created: latency-svc-qbbvd Dec 23 11:42:00.224: INFO: Got endpoints: latency-svc-qbbvd [2.079979318s] Dec 23 11:42:00.385: INFO: Created: latency-svc-rplkp Dec 23 11:42:00.395: INFO: Got endpoints: latency-svc-rplkp [2.063083921s] Dec 23 11:42:00.584: INFO: Created: latency-svc-pdnrl Dec 23 11:42:00.619: INFO: Got endpoints: latency-svc-pdnrl [2.110774744s] Dec 23 11:42:00.799: INFO: Created: latency-svc-g76qs Dec 23 11:42:00.828: INFO: Got endpoints: latency-svc-g76qs [2.096369236s] Dec 23 11:42:01.005: INFO: Created: latency-svc-tpbjs Dec 23 11:42:01.040: INFO: Got endpoints: latency-svc-tpbjs [2.018904224s] Dec 23 11:42:01.218: INFO: Created: latency-svc-zdsk8 Dec 23 11:42:01.229: INFO: Got endpoints: latency-svc-zdsk8 [2.185526005s] Dec 23 11:42:01.402: INFO: Created: latency-svc-9dkzl Dec 23 11:42:01.417: INFO: Got endpoints: latency-svc-9dkzl [2.218930696s] Dec 23 11:42:01.653: INFO: Created: latency-svc-5cldr Dec 23 11:42:01.684: INFO: Got endpoints: latency-svc-5cldr [2.327444315s] Dec 23 11:42:01.734: INFO: Created: latency-svc-pdfl8 Dec 23 11:42:01.904: INFO: Got endpoints: latency-svc-pdfl8 [2.509686742s] Dec 23 11:42:02.110: INFO: Created: latency-svc-78wg5 Dec 23 11:42:02.151: INFO: Got endpoints: latency-svc-78wg5 [2.594667512s] Dec 23 11:42:02.303: INFO: Created: latency-svc-rd7vf Dec 23 11:42:02.327: INFO: Got endpoints: latency-svc-rd7vf [2.611681361s] Dec 23 11:42:02.499: INFO: Created: latency-svc-8dm7s Dec 23 11:42:02.534: INFO: Got endpoints: latency-svc-8dm7s [2.78119749s] Dec 23 11:42:02.576: INFO: Created: latency-svc-d5lzq Dec 23 11:42:02.620: INFO: Got endpoints: latency-svc-d5lzq [2.704822309s] Dec 23 11:42:02.778: INFO: Created: latency-svc-kqcmj Dec 23 11:42:02.953: INFO: Got endpoints: latency-svc-kqcmj [2.971595755s] Dec 23 11:42:02.996: INFO: Created: latency-svc-ths2v Dec 23 11:42:03.000: INFO: Got endpoints: latency-svc-ths2v [2.802154306s] Dec 23 11:42:03.183: INFO: Created: latency-svc-sn4n9 Dec 23 11:42:03.214: INFO: Got endpoints: latency-svc-sn4n9 [2.989328221s] Dec 23 11:42:03.451: INFO: Created: latency-svc-xm2wr Dec 23 11:42:03.490: INFO: Got endpoints: latency-svc-xm2wr [3.09437482s] Dec 23 11:42:03.627: INFO: Created: latency-svc-q2q7p Dec 23 11:42:03.827: INFO: Got endpoints: latency-svc-q2q7p [3.207620872s] Dec 23 11:42:04.035: INFO: Created: latency-svc-cx4wj Dec 23 11:42:04.035: INFO: Created: latency-svc-r4vjj Dec 23 11:42:04.056: INFO: Got endpoints: latency-svc-r4vjj [3.226970209s] Dec 23 11:42:04.076: INFO: Got endpoints: latency-svc-cx4wj [3.036516621s] Dec 23 11:42:04.231: INFO: Created: latency-svc-r7sj2 Dec 23 11:42:04.245: INFO: Got endpoints: latency-svc-r7sj2 [3.01609481s] Dec 23 11:42:04.412: INFO: Created: latency-svc-4z4c4 Dec 23 11:42:04.499: INFO: Got endpoints: latency-svc-4z4c4 [3.081990845s] Dec 23 11:42:04.544: INFO: Created: latency-svc-mjzg5 Dec 23 11:42:04.656: INFO: Got endpoints: latency-svc-mjzg5 [2.971226436s] Dec 23 11:42:04.668: INFO: Created: latency-svc-6gvzd Dec 23 11:42:04.681: INFO: Got endpoints: 
latency-svc-6gvzd [2.776669517s] Dec 23 11:42:04.773: INFO: Created: latency-svc-6cw6l Dec 23 11:42:04.869: INFO: Got endpoints: latency-svc-6cw6l [2.717523887s] Dec 23 11:42:04.909: INFO: Created: latency-svc-zthgv Dec 23 11:42:04.909: INFO: Got endpoints: latency-svc-zthgv [2.581549512s] Dec 23 11:42:04.953: INFO: Created: latency-svc-9x6qr Dec 23 11:42:05.063: INFO: Got endpoints: latency-svc-9x6qr [2.528607636s] Dec 23 11:42:05.088: INFO: Created: latency-svc-dxsvv Dec 23 11:42:05.324: INFO: Got endpoints: latency-svc-dxsvv [2.704217193s] Dec 23 11:42:05.326: INFO: Created: latency-svc-dp5ww Dec 23 11:42:05.347: INFO: Got endpoints: latency-svc-dp5ww [2.394461668s] Dec 23 11:42:05.401: INFO: Created: latency-svc-vb5g2 Dec 23 11:42:05.412: INFO: Got endpoints: latency-svc-vb5g2 [2.41160804s] Dec 23 11:42:05.521: INFO: Created: latency-svc-66kj8 Dec 23 11:42:05.534: INFO: Got endpoints: latency-svc-66kj8 [2.319551342s] Dec 23 11:42:05.585: INFO: Created: latency-svc-crhqw Dec 23 11:42:05.697: INFO: Got endpoints: latency-svc-crhqw [2.207082446s] Dec 23 11:42:05.751: INFO: Created: latency-svc-pg6wk Dec 23 11:42:05.777: INFO: Got endpoints: latency-svc-pg6wk [1.949424967s] Dec 23 11:42:05.991: INFO: Created: latency-svc-d8zp9 Dec 23 11:42:05.992: INFO: Got endpoints: latency-svc-d8zp9 [1.936356289s] Dec 23 11:42:06.156: INFO: Created: latency-svc-zr6zw Dec 23 11:42:06.335: INFO: Got endpoints: latency-svc-zr6zw [2.258008538s] Dec 23 11:42:06.344: INFO: Created: latency-svc-fndt8 Dec 23 11:42:06.355: INFO: Got endpoints: latency-svc-fndt8 [2.109549139s] Dec 23 11:42:06.415: INFO: Created: latency-svc-hn45t Dec 23 11:42:06.547: INFO: Got endpoints: latency-svc-hn45t [2.047610391s] Dec 23 11:42:06.755: INFO: Created: latency-svc-bws7x Dec 23 11:42:06.764: INFO: Got endpoints: latency-svc-bws7x [2.10823825s] Dec 23 11:42:06.920: INFO: Created: latency-svc-qks78 Dec 23 11:42:06.940: INFO: Got endpoints: latency-svc-qks78 [2.258923009s] Dec 23 11:42:06.996: INFO: Created: latency-svc-kd7zm Dec 23 11:42:07.064: INFO: Got endpoints: latency-svc-kd7zm [2.194968425s] Dec 23 11:42:07.064: INFO: Latencies: [109.284338ms 177.775453ms 357.405679ms 535.436944ms 814.520074ms 1.005154964s 1.28975565s 1.507249106s 1.751396319s 1.801966425s 1.936356289s 1.949424967s 1.961324633s 2.004045476s 2.018904224s 2.047610391s 2.059874534s 2.063083921s 2.069223021s 2.079979318s 2.080493854s 2.096369236s 2.099464229s 2.10823825s 2.109549139s 2.110774744s 2.114828579s 2.122882504s 2.129535684s 2.133519568s 2.134954445s 2.139743116s 2.155788253s 2.165792199s 2.173461885s 2.178104853s 2.185526005s 2.187936132s 2.193073744s 2.194968425s 2.20370753s 2.207082446s 2.218930696s 2.232213324s 2.236648559s 2.238303094s 2.245003561s 2.258008538s 2.258923009s 2.259990676s 2.27565208s 2.285198398s 2.286117693s 2.289760116s 2.296823255s 2.319551342s 2.323470604s 2.324450239s 2.324557734s 2.32564308s 2.327444315s 2.32762545s 2.389017447s 2.394461668s 2.396048274s 2.410765478s 2.41160804s 2.430675249s 2.451646118s 2.451989841s 2.483703497s 2.486606936s 2.509686742s 2.524232103s 2.52460671s 2.526531139s 2.528607636s 2.547022877s 2.555058787s 2.561461892s 2.57806436s 2.579895757s 2.58052451s 2.581549512s 2.582126799s 2.592288753s 2.594667512s 2.597209168s 2.601009427s 2.611383951s 2.611681361s 2.612343187s 2.614802136s 2.616992905s 2.6294871s 2.636030528s 2.649363475s 2.656046906s 2.666922662s 2.689153405s 2.704217193s 2.704822309s 2.706046583s 2.717523887s 2.726392547s 2.731233784s 2.742769159s 2.745141525s 2.745821388s 2.765198104s 
2.771024306s 2.772109805s 2.776669517s 2.78119749s 2.802154306s 2.825774055s 2.844807671s 2.858940254s 2.873016166s 2.883451547s 2.892137901s 2.899628907s 2.907707364s 2.923415764s 2.937515645s 2.943350407s 2.946170682s 2.9708725s 2.971226436s 2.971595755s 2.989328221s 3.01609481s 3.036516621s 3.064351654s 3.081990845s 3.09437482s 3.097998097s 3.109114045s 3.127557703s 3.137781829s 3.166562541s 3.201392407s 3.207620872s 3.219134783s 3.226970209s 3.232147745s 3.233886819s 3.240194748s 3.258285228s 3.27656636s 3.287739359s 3.287898616s 3.308075335s 3.315307876s 3.317742821s 3.336155796s 3.362235094s 3.363046974s 3.387869554s 3.418946282s 3.427285603s 3.438649557s 3.439907843s 3.454055164s 3.460091549s 3.466311939s 3.482232252s 3.495155618s 3.507605559s 3.51257644s 3.516668569s 3.519734368s 3.520617363s 3.524842603s 3.562542251s 3.580521174s 3.591796774s 3.612361725s 3.61745306s 3.670982732s 3.693265202s 3.713839509s 3.729092228s 3.762193465s 3.790754541s 3.877863571s 3.878272231s 3.95639653s 4.023579101s 4.030950826s 4.071998041s 4.139583759s 4.154155672s 4.187086703s 4.193533807s 4.26975188s 4.282672922s 4.300041339s 4.347671123s 4.442491091s] Dec 23 11:42:07.065: INFO: 50 %ile: 2.704217193s Dec 23 11:42:07.065: INFO: 90 %ile: 3.693265202s Dec 23 11:42:07.065: INFO: 99 %ile: 4.347671123s Dec 23 11:42:07.065: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:42:07.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-sd2cw" for this suite. Dec 23 11:43:11.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:43:11.164: INFO: namespace: e2e-tests-svc-latency-sd2cw, resource: bindings, ignored listing per whitelist Dec 23 11:43:11.228: INFO: namespace e2e-tests-svc-latency-sd2cw deletion completed in 1m4.15190371s • [SLOW TEST:114.590 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:43:11.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 23 11:43:11.420: INFO: Waiting up to 5m0s for pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-hzsl6" to be "success or failure" Dec 23 11:43:11.428: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.648252ms Dec 23 11:43:13.443: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022748058s Dec 23 11:43:15.457: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036381762s Dec 23 11:43:17.490: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070289718s Dec 23 11:43:19.510: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089539469s Dec 23 11:43:21.523: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103062746s STEP: Saw pod success Dec 23 11:43:21.523: INFO: Pod "pod-659a2d04-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:43:21.529: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-659a2d04-2579-11ea-a9d2-0242ac110005 container test-container: STEP: delete the pod Dec 23 11:43:22.152: INFO: Waiting for pod pod-659a2d04-2579-11ea-a9d2-0242ac110005 to disappear Dec 23 11:43:22.603: INFO: Pod pod-659a2d04-2579-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:43:22.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hzsl6" for this suite. Dec 23 11:43:28.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:43:28.787: INFO: namespace: e2e-tests-emptydir-hzsl6, resource: bindings, ignored listing per whitelist Dec 23 11:43:28.854: INFO: namespace e2e-tests-emptydir-hzsl6 deletion completed in 6.233204565s • [SLOW TEST:17.626 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:43:28.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-701a699d-2579-11ea-a9d2-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-701a6984-2579-11ea-a9d2-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 23 11:43:29.052: INFO: Waiting up to 5m0s for pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-94xvg" to be "success or failure" Dec 23 11:43:29.136: INFO: Pod 
"projected-volume-701a6859-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 83.097677ms Dec 23 11:43:31.385: INFO: Pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332648908s Dec 23 11:43:33.403: INFO: Pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350201211s Dec 23 11:43:35.420: INFO: Pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367050614s Dec 23 11:43:37.436: INFO: Pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38376615s Dec 23 11:43:39.455: INFO: Pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.402259953s STEP: Saw pod success Dec 23 11:43:39.455: INFO: Pod "projected-volume-701a6859-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure" Dec 23 11:43:39.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-701a6859-2579-11ea-a9d2-0242ac110005 container projected-all-volume-test: STEP: delete the pod Dec 23 11:43:40.576: INFO: Waiting for pod projected-volume-701a6859-2579-11ea-a9d2-0242ac110005 to disappear Dec 23 11:43:40.915: INFO: Pod projected-volume-701a6859-2579-11ea-a9d2-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:43:40.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-94xvg" for this suite. Dec 23 11:43:47.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 11:43:47.407: INFO: namespace: e2e-tests-projected-94xvg, resource: bindings, ignored listing per whitelist Dec 23 11:43:47.442: INFO: namespace e2e-tests-projected-94xvg deletion completed in 6.511616869s • [SLOW TEST:18.587 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 23 11:43:47.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Dec 23 11:43:55.937: INFO: Pod pod-hostip-7b4fa892-2579-11ea-a9d2-0242ac110005 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:43:55.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8vws8" for this suite.
Dec 23 11:44:20.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:44:20.132: INFO: namespace: e2e-tests-pods-8vws8, resource: bindings, ignored listing per whitelist
Dec 23 11:44:20.190: INFO: namespace e2e-tests-pods-8vws8 deletion completed in 24.241501153s

• [SLOW TEST:32.748 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:44:20.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 23 11:44:20.387: INFO: Waiting up to 5m0s for pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-kchwz" to be "success or failure"
Dec 23 11:44:20.404: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.517266ms
Dec 23 11:44:22.440: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053056545s
Dec 23 11:44:24.451: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06422795s
Dec 23 11:44:26.687: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29971011s
Dec 23 11:44:28.732: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34539247s
Dec 23 11:44:30.752: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.364976854s
STEP: Saw pod success
Dec 23 11:44:30.752: INFO: Pod "downward-api-8eb13821-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:44:30.758: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8eb13821-2579-11ea-a9d2-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 23 11:44:30.920: INFO: Waiting for pod downward-api-8eb13821-2579-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:44:30.977: INFO: Pod downward-api-8eb13821-2579-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:44:30.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kchwz" for this suite.
Dec 23 11:44:37.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:44:37.745: INFO: namespace: e2e-tests-downward-api-kchwz, resource: bindings, ignored listing per whitelist
Dec 23 11:44:37.977: INFO: namespace e2e-tests-downward-api-kchwz deletion completed in 6.990764872s

• [SLOW TEST:17.786 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:44:37.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 11:44:38.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-86qpv" to be "success or failure"
Dec 23 11:44:38.633: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.783882ms
Dec 23 11:44:40.901: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321839192s
Dec 23 11:44:42.923: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344239688s
Dec 23 11:44:44.946: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366690919s
Dec 23 11:44:46.989: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.40939442s
Dec 23 11:44:49.012: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.432510046s
STEP: Saw pod success
Dec 23 11:44:49.012: INFO: Pod "downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:44:49.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 11:44:49.119: INFO: Waiting for pod downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:44:49.127: INFO: Pod downwardapi-volume-997240c6-2579-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:44:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-86qpv" for this suite.
Dec 23 11:44:55.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:44:55.285: INFO: namespace: e2e-tests-downward-api-86qpv, resource: bindings, ignored listing per whitelist
Dec 23 11:44:55.509: INFO: namespace e2e-tests-downward-api-86qpv deletion completed in 6.368410676s

• [SLOW TEST:17.532 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:44:55.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 23 11:44:55.749: INFO: Waiting up to 5m0s for pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-thj8w" to be "success or failure"
Dec 23 11:44:55.766: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.97361ms
Dec 23 11:44:58.079: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329772793s
Dec 23 11:45:00.121: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372448394s
Dec 23 11:45:02.235: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.485511032s
Dec 23 11:45:04.317: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.568342721s
Dec 23 11:45:06.446: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.696806317s
STEP: Saw pod success
Dec 23 11:45:06.446: INFO: Pod "downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:45:06.453: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 23 11:45:06.898: INFO: Waiting for pod downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:45:06.911: INFO: Pod downward-api-a3c5e8e7-2579-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:45:06.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-thj8w" for this suite.
Dec 23 11:45:12.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:45:13.087: INFO: namespace: e2e-tests-downward-api-thj8w, resource: bindings, ignored listing per whitelist
Dec 23 11:45:13.112: INFO: namespace e2e-tests-downward-api-thj8w deletion completed in 6.192081858s

• [SLOW TEST:17.602 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:45:13.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 23 11:45:13.326: INFO: PodSpec: initContainers in spec.initContainers
Dec 23 11:46:18.746: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ae452a1c-2579-11ea-a9d2-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-b78rg", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-b78rg/pods/pod-init-ae452a1c-2579-11ea-a9d2-0242ac110005", UID:"ae4994e2-2579-11ea-a994-fa163e34d433", ResourceVersion:"15786664", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712698313, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"326048918"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2rp94", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002568000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2rp94", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2rp94", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2rp94", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024a6138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c96000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024a61c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024a6250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024a6258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024a625c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712698313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712698313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712698313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712698313, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001c18040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00218c070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00218c0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://b04af95a26d7518cb574950381edaf11e82f91f6e4c83209cddb91e6e526c742"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c18080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c18060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 23 11:46:18.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-b78rg" for this suite. 
Dec 23 11:46:42.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:46:42.958: INFO: namespace: e2e-tests-init-container-b78rg, resource: bindings, ignored listing per whitelist
Dec 23 11:46:43.036: INFO: namespace e2e-tests-init-container-b78rg deletion completed in 24.279099277s

• [SLOW TEST:89.924 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:46:43.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 11:46:43.447: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.918233ms)
Dec 23 11:46:43.454: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.48979ms)
Dec 23 11:46:43.459: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.554658ms)
Dec 23 11:46:43.465: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.49659ms)
Dec 23 11:46:43.469: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.405289ms)
Dec 23 11:46:43.474: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.494753ms)
Dec 23 11:46:43.479: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.277014ms)
Dec 23 11:46:43.484: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.824065ms)
Dec 23 11:46:43.490: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.354952ms)
Dec 23 11:46:43.495: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.688515ms)
Dec 23 11:46:43.500: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.485882ms)
Dec 23 11:46:43.504: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.106632ms)
Dec 23 11:46:43.511: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.969731ms)
Dec 23 11:46:43.561: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 50.654963ms)
Dec 23 11:46:43.573: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.825741ms)
Dec 23 11:46:43.586: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.220393ms)
Dec 23 11:46:43.616: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.12046ms)
Dec 23 11:46:43.627: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.614967ms)
Dec 23 11:46:43.634: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.079438ms)
Dec 23 11:46:43.640: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.913234ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:46:43.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-sxbfv" for this suite.
Dec 23 11:46:49.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:46:49.844: INFO: namespace: e2e-tests-proxy-sxbfv, resource: bindings, ignored listing per whitelist
Dec 23 11:46:49.919: INFO: namespace e2e-tests-proxy-sxbfv deletion completed in 6.272891799s

• [SLOW TEST:6.881 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
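As an aside (not part of the captured run), the node-logs proxy subresource exercised by the test above can be queried by hand with kubectl; the node name and log file come from this run, everything else is illustrative.

# List the node's log directory through the apiserver proxy subresource.
NODE=hunter-server-hu5at5svl7ps
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/logs/"
# Fetch one file from that listing, e.g. alternatives.log.
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/logs/alternatives.log"
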
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:46:49.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-z9mk7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-z9mk7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 11:47:06.233: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.252: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.265: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.272: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.276: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.280: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.284: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.290: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.295: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.313: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.320: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.329: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.337: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.412: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.425: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.446: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.493: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.518: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.543: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.564: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005)
Dec 23 11:47:06.564: INFO: Lookups using e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-z9mk7.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 23 11:47:11.916: INFO: DNS probes using e2e-tests-dns-z9mk7/dns-test-e7e72a50-2579-11ea-a9d2-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:47:12.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-z9mk7" for this suite.
Dec 23 11:47:20.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:47:20.496: INFO: namespace: e2e-tests-dns-z9mk7, resource: bindings, ignored listing per whitelist
Dec 23 11:47:20.611: INFO: namespace e2e-tests-dns-z9mk7 deletion completed in 8.3989903s

• [SLOW TEST:30.692 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
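A minimal manual version of the lookups the probe pod above automates, assuming cluster DNS serves the default cluster.local domain; the pod name is a placeholder, and busybox's nslookup output format varies between versions.

# Resolve the kubernetes service name through cluster DNS from a throwaway pod.
kubectl run dns-check --image=docker.io/library/busybox:1.29 --restart=Never -- nslookup kubernetes.default
kubectl logs dns-check        # expect an answer pointing at the kubernetes service ClusterIP
kubectl delete pod dns-check
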
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:47:20.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pmgbw/configmap-test-fa41c2cb-2579-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 11:47:20.857: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-pmgbw" to be "success or failure"
Dec 23 11:47:20.872: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.497253ms
Dec 23 11:47:22.889: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0319152s
Dec 23 11:47:24.918: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060839823s
Dec 23 11:47:26.998: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141080669s
Dec 23 11:47:29.018: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160886137s
Dec 23 11:47:31.068: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211358747s
STEP: Saw pod success
Dec 23 11:47:31.069: INFO: Pod "pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:47:31.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005 container env-test: 
STEP: delete the pod
Dec 23 11:47:31.796: INFO: Waiting for pod pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:47:32.176: INFO: Pod pod-configmaps-fa426982-2579-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:47:32.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pmgbw" for this suite.
Dec 23 11:47:38.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:47:38.574: INFO: namespace: e2e-tests-configmap-pmgbw, resource: bindings, ignored listing per whitelist
Dec 23 11:47:38.591: INFO: namespace e2e-tests-configmap-pmgbw deletion completed in 6.383831443s

• [SLOW TEST:17.979 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
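The behavior verified above can be reproduced roughly as follows; the ConfigMap, pod and key names are illustrative, not the generated ones from the run.

# Expose a ConfigMap key to a container as an environment variable.
kubectl create configmap example-config --from-literal=DATA_1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: DATA_1
EOF
kubectl logs configmap-env-demo   # should print DATA_1=value-1 once the pod has completed
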
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:47:38.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 23 11:47:38.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-6gq8q run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 23 11:47:51.528: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 23 11:47:51.528: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:47:53.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6gq8q" for this suite.
Dec 23 11:47:59.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:48:00.141: INFO: namespace: e2e-tests-kubectl-6gq8q, resource: bindings, ignored listing per whitelist
Dec 23 11:48:00.151: INFO: namespace e2e-tests-kubectl-6gq8q deletion completed in 6.598640807s

• [SLOW TEST:21.560 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
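For reference, the attach-and-clean-up pattern above looks roughly like this when run by hand; on the kubectl version in this log it creates a Job through the deprecated job/v1 generator, while newer kubectl releases drop the generators and create a plain Pod instead.

# Pipe data to a one-shot workload over attached stdin; --rm deletes it when it finishes.
echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm --restart=OnFailure --attach --stdin \
  -- sh -c 'cat && echo stdin closed'
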
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:48:00.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zwmlp in namespace e2e-tests-proxy-vpmsb
I1223 11:48:00.578906       8 runners.go:184] Created replication controller with name: proxy-service-zwmlp, namespace: e2e-tests-proxy-vpmsb, replica count: 1
I1223 11:48:01.630851       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:02.631440       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:03.634354       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:04.635420       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:05.636133       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:06.636805       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:07.637424       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:08.638212       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:09.638839       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 11:48:10.639528       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:11.640131       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:12.640587       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:13.641211       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:14.641685       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:15.642174       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:16.642665       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1223 11:48:17.643466       8 runners.go:184] proxy-service-zwmlp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 23 11:48:17.657: INFO: setup took 17.277447086s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 23 11:48:17.693: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vpmsb/pods/proxy-service-zwmlp-ggd44:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 23 11:48:41.213: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:49:00.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4wxgg" for this suite.
Dec 23 11:49:06.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:49:06.612: INFO: namespace: e2e-tests-init-container-4wxgg, resource: bindings, ignored listing per whitelist
Dec 23 11:49:06.662: INFO: namespace e2e-tests-init-container-4wxgg deletion completed in 6.319300238s

• [SLOW TEST:25.673 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
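A hand-built equivalent of the failing-init-container scenario above (an illustrative manifest, not the one generated by the framework): with restartPolicy Never, a failing init container marks the whole pod Failed and the app container never starts.

# Pod whose first init container fails; the pause container should never run.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod init-fail-demo -w   # expect Init:Error and then a Failed pod phase
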
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:49:06.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 23 11:49:06.941: INFO: Waiting up to 5m0s for pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-54njs" to be "success or failure"
Dec 23 11:49:06.951: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256117ms
Dec 23 11:49:08.972: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030728823s
Dec 23 11:49:10.983: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041826816s
Dec 23 11:49:13.009: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068213233s
Dec 23 11:49:15.020: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07925371s
Dec 23 11:49:17.038: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097239879s
STEP: Saw pod success
Dec 23 11:49:17.038: INFO: Pod "pod-3981b71d-257a-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:49:17.043: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3981b71d-257a-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 11:49:17.087: INFO: Waiting for pod pod-3981b71d-257a-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:49:17.093: INFO: Pod pod-3981b71d-257a-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:49:17.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-54njs" for this suite.
Dec 23 11:49:23.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:49:23.431: INFO: namespace: e2e-tests-emptydir-54njs, resource: bindings, ignored listing per whitelist
Dec 23 11:49:23.522: INFO: namespace e2e-tests-emptydir-54njs deletion completed in 6.421489577s

• [SLOW TEST:16.861 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
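The emptyDir permission check above boils down to something like the following sketch; the pod name, mount path and commands are illustrative rather than the test's own.

# Mount an emptyDir volume (default medium) and inspect the mode of a file created in it.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /cache/file && chmod 0666 /cache/file && ls -l /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
EOF
kubectl logs emptydir-demo   # the listing should show -rw-rw-rw- for /cache/file
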
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:49:23.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-43a7c719-257a-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 11:49:24.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-df4qt" to be "success or failure"
Dec 23 11:49:24.095: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.005176ms
Dec 23 11:49:26.114: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031814037s
Dec 23 11:49:28.165: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083768712s
Dec 23 11:49:30.547: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465615027s
Dec 23 11:49:32.580: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4980123s
Dec 23 11:49:34.645: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.563338763s
STEP: Saw pod success
Dec 23 11:49:34.646: INFO: Pod "pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:49:34.688: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 11:49:35.532: INFO: Waiting for pod pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:49:35.933: INFO: Pod pod-projected-configmaps-43ab7c67-257a-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:49:35.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-df4qt" for this suite.
Dec 23 11:49:44.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:49:44.286: INFO: namespace: e2e-tests-projected-df4qt, resource: bindings, ignored listing per whitelist
Dec 23 11:49:44.294: INFO: namespace e2e-tests-projected-df4qt deletion completed in 8.34816252s

• [SLOW TEST:20.770 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
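
The projected-ConfigMap spec above only records the pod's phase transitions; a minimal manual reproduction of the same check — a non-root container reading a ConfigMap through a projected volume — is sketched below. The ConfigMap, pod, and image names are illustrative assumptions, not the fixtures the suite generates.

# create a ConfigMap and a non-root pod that reads it via a projected volume
kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  securityContext:
    runAsUser: 1000        # non-root, mirroring the [NodeConformance] variant above
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
# once the pod reaches Succeeded, the container log should read "value-1"
kubectl logs projected-configmap-demo
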
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:49:44.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-4ff6c858-257a-11ea-a9d2-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-4ff6cc18-257a-11ea-a9d2-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4ff6c858-257a-11ea-a9d2-0242ac110005
STEP: Updating configmap cm-test-opt-upd-4ff6cc18-257a-11ea-a9d2-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-4ff6ccda-257a-11ea-a9d2-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:50:05.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7j865" for this suite.
Dec 23 11:50:29.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:50:29.285: INFO: namespace: e2e-tests-configmap-7j865, resource: bindings, ignored listing per whitelist
Dec 23 11:50:29.318: INFO: namespace e2e-tests-configmap-7j865 deletion completed in 24.214738664s

• [SLOW TEST:45.023 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:50:29.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vdrhp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 11:50:29.464: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 11:51:05.896: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-vdrhp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 11:51:05.896: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 11:51:07.615: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:51:07.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-vdrhp" for this suite.
Dec 23 11:51:33.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:51:34.142: INFO: namespace: e2e-tests-pod-network-test-vdrhp, resource: bindings, ignored listing per whitelist
Dec 23 11:51:34.142: INFO: namespace e2e-tests-pod-network-test-vdrhp deletion completed in 26.499248693s

• [SLOW TEST:64.824 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
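
The ExecWithOptions entry above is the framework's wrapper around an in-pod exec; while the test namespace still exists, the same UDP probe can be issued by hand with kubectl exec (pod, container, IP, and port are the values from this particular run):

# send "hostName" over UDP to the netserver pod and expect its hostname back
kubectl -n e2e-tests-pod-network-test-vdrhp exec host-test-container-pod -c hostexec -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v "^\s*$"'
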
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:51:34.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-w4qqn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w4qqn to expose endpoints map[]
Dec 23 11:51:34.791: INFO: Get endpoints failed (24.434597ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 23 11:51:35.833: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w4qqn exposes endpoints map[] (1.065977957s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-w4qqn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w4qqn to expose endpoints map[pod1:[80]]
Dec 23 11:51:40.234: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.330898915s elapsed, will retry)
Dec 23 11:51:46.036: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (10.132931689s elapsed, will retry)
Dec 23 11:51:47.056: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w4qqn exposes endpoints map[pod1:[80]] (11.15243655s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-w4qqn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w4qqn to expose endpoints map[pod1:[80] pod2:[80]]
Dec 23 11:51:51.484: INFO: Unexpected endpoints: found map[9247f956-257a-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.417407577s elapsed, will retry)
Dec 23 11:51:57.809: INFO: Unexpected endpoints: found map[9247f956-257a-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (10.742421529s elapsed, will retry)
Dec 23 11:51:58.827: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w4qqn exposes endpoints map[pod1:[80] pod2:[80]] (11.760780737s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-w4qqn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w4qqn to expose endpoints map[pod2:[80]]
Dec 23 11:52:00.274: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w4qqn exposes endpoints map[pod2:[80]] (1.439548601s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-w4qqn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w4qqn to expose endpoints map[]
Dec 23 11:52:01.482: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w4qqn exposes endpoints map[] (1.192551181s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:52:01.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-w4qqn" for this suite.
Dec 23 11:52:25.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:52:25.803: INFO: namespace: e2e-tests-services-w4qqn, resource: bindings, ignored listing per whitelist
Dec 23 11:52:25.918: INFO: namespace e2e-tests-services-w4qqn deletion completed in 24.246324426s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:51.776 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
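
What the spec above exercises is ordinary Endpoints bookkeeping: the service's endpoint set grows and shrinks as matching pods are created and deleted. A rough manual equivalent, with illustrative service/pod names and image:

# a ClusterIP service selecting app=endpoint-demo, initially with no endpoints
kubectl create service clusterip endpoint-demo --tcp=80:80
kubectl get endpoints endpoint-demo        # ENDPOINTS shows <none>

# a matching pod; once it is Running, its IP appears in the endpoint set
kubectl run pod1 --image=nginx --restart=Never --labels=app=endpoint-demo --port=80
kubectl get endpoints endpoint-demo -o wide

# deleting the pod drains the endpoint set again
kubectl delete pod pod1
kubectl get endpoints endpoint-demo
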
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:52:25.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-vcnn7/configmap-test-b031cbf9-257a-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 11:52:26.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-vcnn7" to be "success or failure"
Dec 23 11:52:26.198: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.763421ms
Dec 23 11:52:28.214: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065458247s
Dec 23 11:52:30.233: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084377192s
Dec 23 11:52:32.500: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.352105887s
Dec 23 11:52:35.375: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226707724s
Dec 23 11:52:37.401: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.25246037s
STEP: Saw pod success
Dec 23 11:52:37.401: INFO: Pod "pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:52:37.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005 container env-test: 
STEP: delete the pod
Dec 23 11:52:37.765: INFO: Waiting for pod pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:52:37.782: INFO: Pod pod-configmaps-b033f0be-257a-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:52:37.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vcnn7" for this suite.
Dec 23 11:52:43.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:52:44.057: INFO: namespace: e2e-tests-configmap-vcnn7, resource: bindings, ignored listing per whitelist
Dec 23 11:52:44.199: INFO: namespace e2e-tests-configmap-vcnn7 deletion completed in 6.385923463s

• [SLOW TEST:18.280 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
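
Here the ConfigMap is consumed through an environment variable rather than a volume; a compact hand-rolled version of the same check (names and image are assumptions, not the generated fixture):

kubectl create configmap env-demo --from-literal=DATA_1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: DATA_1
EOF
# after the pod completes, its log should read CONFIG_DATA_1=value-1
kubectl logs configmap-env-demo
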
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:52:44.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 23 11:52:44.318: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 23 11:52:44.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:52:44.761: INFO: stderr: ""
Dec 23 11:52:44.761: INFO: stdout: "service/redis-slave created\n"
Dec 23 11:52:44.762: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 23 11:52:44.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:52:45.268: INFO: stderr: ""
Dec 23 11:52:45.268: INFO: stdout: "service/redis-master created\n"
Dec 23 11:52:45.270: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 23 11:52:45.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:52:45.957: INFO: stderr: ""
Dec 23 11:52:45.958: INFO: stdout: "service/frontend created\n"
Dec 23 11:52:45.959: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 23 11:52:45.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:52:46.688: INFO: stderr: ""
Dec 23 11:52:46.688: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 23 11:52:46.689: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 23 11:52:46.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:52:47.204: INFO: stderr: ""
Dec 23 11:52:47.205: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 23 11:52:47.206: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 23 11:52:47.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:52:47.713: INFO: stderr: ""
Dec 23 11:52:47.713: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 23 11:52:47.713: INFO: Waiting for all frontend pods to be Running.
Dec 23 11:53:17.767: INFO: Waiting for frontend to serve content.
Dec 23 11:53:18.039: INFO: Trying to add a new entry to the guestbook.
Dec 23 11:53:18.088: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 23 11:53:18.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:53:18.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 11:53:18.645: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 11:53:18.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:53:19.170: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 11:53:19.170: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 11:53:19.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:53:19.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 11:53:19.358: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 11:53:19.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:53:19.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 11:53:19.498: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 11:53:19.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:53:20.132: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 11:53:20.132: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 11:53:20.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ktqrs'
Dec 23 11:53:20.348: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 11:53:20.348: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:53:20.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ktqrs" for this suite.
Dec 23 11:54:04.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:54:04.643: INFO: namespace: e2e-tests-kubectl-ktqrs, resource: bindings, ignored listing per whitelist
Dec 23 11:54:04.748: INFO: namespace e2e-tests-kubectl-ktqrs deletion completed in 44.38020265s

• [SLOW TEST:80.549 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
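
The guestbook validation above ("Trying to add a new entry...") goes through the frontend service created from the manifests earlier in this spec; assuming those manifests are applied in the current namespace, a quick manual spot-check (the exact query string the suite uses is not shown in this log) could be:

# forward a local port to the frontend service and fetch the guestbook page
kubectl port-forward service/frontend 8080:80 &
curl -s http://localhost:8080/ | head
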
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:54:04.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 11:54:04.997: INFO: Creating ReplicaSet my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005
Dec 23 11:54:05.038: INFO: Pod name my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005: Found 0 pods out of 1
Dec 23 11:54:10.053: INFO: Pod name my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005: Found 1 pods out of 1
Dec 23 11:54:10.054: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005" is running
Dec 23 11:54:16.064: INFO: Pod "my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005-ghhlg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:54:05 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:54:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:54:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 11:54:05 +0000 UTC Reason: Message:}])
Dec 23 11:54:16.064: INFO: Trying to dial the pod
Dec 23 11:54:21.117: INFO: Controller my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005: Got expected result from replica 1 [my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005-ghhlg]: "my-hostname-basic-eb2bd67e-257a-11ea-a9d2-0242ac110005-ghhlg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:54:21.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ntcsl" for this suite.
Dec 23 11:54:29.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:54:29.268: INFO: namespace: e2e-tests-replicaset-ntcsl, resource: bindings, ignored listing per whitelist
Dec 23 11:54:29.376: INFO: namespace e2e-tests-replicaset-ntcsl deletion completed in 8.244056226s

• [SLOW TEST:24.627 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:54:29.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-rjl8d;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-rjl8d;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rjl8d.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 36.219.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.219.36_udp@PTR;check="$$(dig +tcp +noall +answer +search 36.219.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.219.36_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-rjl8d;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-rjl8d;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rjl8d.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 36.219.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.219.36_udp@PTR;check="$$(dig +tcp +noall +answer +search 36.219.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.219.36_tcp@PTR;sleep 1; done
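
Both probe scripts above are the same loop rendered on one line; each check is a single dig lookup whose non-empty answer is recorded as OK. The doubled $$ is an escape in the pod spec (Kubernetes expands $(VAR) references in container commands, and $$ produces a literal $), so run by hand each check collapses to something like:

# one UDP lookup of the service's cluster-local name, from inside the probe pod
check="$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rjl8d.svc A)"
test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-rjl8d.svc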

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 11:54:49.672: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.684: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.696: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-rjl8d from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.714: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-rjl8d from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.723: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-rjl8d.svc from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.740: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-rjl8d.svc from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.758: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.770: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.793: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.807: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.814: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005: the server could not find the requested resource (get pods dns-test-fab84978-257a-11ea-a9d2-0242ac110005)
Dec 23 11:54:49.826: INFO: Lookups using e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-rjl8d jessie_tcp@dns-test-service.e2e-tests-dns-rjl8d jessie_udp@dns-test-service.e2e-tests-dns-rjl8d.svc jessie_tcp@dns-test-service.e2e-tests-dns-rjl8d.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rjl8d.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-rjl8d.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 23 11:54:55.156: INFO: DNS probes using e2e-tests-dns-rjl8d/dns-test-fab84978-257a-11ea-a9d2-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:54:57.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-rjl8d" for this suite.
Dec 23 11:55:03.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:55:04.100: INFO: namespace: e2e-tests-dns-rjl8d, resource: bindings, ignored listing per whitelist
Dec 23 11:55:04.109: INFO: namespace e2e-tests-dns-rjl8d deletion completed in 6.542561271s

• [SLOW TEST:34.731 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:55:04.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 23 11:55:04.408: INFO: Waiting up to 5m0s for pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-containers-246xm" to be "success or failure"
Dec 23 11:55:04.422: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.318568ms
Dec 23 11:55:06.451: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042413642s
Dec 23 11:55:08.490: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082044664s
Dec 23 11:55:10.768: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360155679s
Dec 23 11:55:12.863: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455112569s
Dec 23 11:55:14.885: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.4766061s
Dec 23 11:55:17.188: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.780196563s
STEP: Saw pod success
Dec 23 11:55:17.189: INFO: Pod "client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:55:17.208: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 11:55:17.388: INFO: Waiting for pod client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:55:17.412: INFO: Pod client-containers-0e8f0e1c-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:55:17.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-246xm" for this suite.
Dec 23 11:55:23.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:55:23.507: INFO: namespace: e2e-tests-containers-246xm, resource: bindings, ignored listing per whitelist
Dec 23 11:55:23.717: INFO: namespace e2e-tests-containers-246xm deletion completed in 6.294826629s

• [SLOW TEST:19.608 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:55:23.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1a4ebcff-257b-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 11:55:24.246: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-9s295" to be "success or failure"
Dec 23 11:55:24.270: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.071365ms
Dec 23 11:55:26.308: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06133344s
Dec 23 11:55:28.353: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106834877s
Dec 23 11:55:30.619: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373172367s
Dec 23 11:55:32.658: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411447829s
Dec 23 11:55:34.765: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.518946188s
STEP: Saw pod success
Dec 23 11:55:34.766: INFO: Pod "pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:55:34.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 23 11:55:35.281: INFO: Waiting for pod pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:55:35.399: INFO: Pod pod-configmaps-1a515367-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:55:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9s295" for this suite.
Dec 23 11:55:41.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:55:41.580: INFO: namespace: e2e-tests-configmap-9s295, resource: bindings, ignored listing per whitelist
Dec 23 11:55:41.789: INFO: namespace e2e-tests-configmap-9s295 deletion completed in 6.383909217s

• [SLOW TEST:18.072 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:55:41.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 11:55:52.423: INFO: Waiting up to 5m0s for pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-pods-nwcgv" to be "success or failure"
Dec 23 11:55:52.522: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.759793ms
Dec 23 11:55:54.841: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417206048s
Dec 23 11:55:56.861: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437186261s
Dec 23 11:55:59.401: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.977171153s
Dec 23 11:56:01.419: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.995022148s
Dec 23 11:56:03.438: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.014835604s
STEP: Saw pod success
Dec 23 11:56:03.439: INFO: Pod "client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:56:03.445: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 23 11:56:04.219: INFO: Waiting for pod client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:56:04.504: INFO: Pod client-envvars-2b2e1b2e-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:56:04.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nwcgv" for this suite.
Dec 23 11:56:48.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:56:48.802: INFO: namespace: e2e-tests-pods-nwcgv, resource: bindings, ignored listing per whitelist
Dec 23 11:56:48.839: INFO: namespace e2e-tests-pods-nwcgv deletion completed in 44.313713965s

• [SLOW TEST:67.048 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:56:48.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 23 11:56:49.063: INFO: Waiting up to 5m0s for pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-tvwvj" to be "success or failure"
Dec 23 11:56:49.075: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.794962ms
Dec 23 11:56:51.384: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320800754s
Dec 23 11:56:53.406: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34233986s
Dec 23 11:56:55.926: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.862748312s
Dec 23 11:56:57.954: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890004583s
Dec 23 11:56:59.982: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.918274866s
STEP: Saw pod success
Dec 23 11:56:59.982: INFO: Pod "pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:57:00.028: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 11:57:00.344: INFO: Waiting for pod pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:57:00.380: INFO: Pod pod-4cf3bc8a-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:57:00.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tvwvj" for this suite.
Dec 23 11:57:08.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:57:08.761: INFO: namespace: e2e-tests-emptydir-tvwvj, resource: bindings, ignored listing per whitelist
Dec 23 11:57:08.884: INFO: namespace e2e-tests-emptydir-tvwvj deletion completed in 8.460637152s

• [SLOW TEST:20.045 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:57:08.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 11:57:09.998: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 23 11:57:10.130: INFO: Number of nodes with available pods: 0
Dec 23 11:57:10.130: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:11.343: INFO: Number of nodes with available pods: 0
Dec 23 11:57:11.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:12.626: INFO: Number of nodes with available pods: 0
Dec 23 11:57:12.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:13.174: INFO: Number of nodes with available pods: 0
Dec 23 11:57:13.174: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:14.244: INFO: Number of nodes with available pods: 0
Dec 23 11:57:14.245: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:15.159: INFO: Number of nodes with available pods: 0
Dec 23 11:57:15.159: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:16.571: INFO: Number of nodes with available pods: 0
Dec 23 11:57:16.572: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:17.193: INFO: Number of nodes with available pods: 0
Dec 23 11:57:17.193: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:18.145: INFO: Number of nodes with available pods: 0
Dec 23 11:57:18.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:19.157: INFO: Number of nodes with available pods: 0
Dec 23 11:57:19.157: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:20.324: INFO: Number of nodes with available pods: 1
Dec 23 11:57:20.325: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 23 11:57:20.520: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:21.599: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:22.906: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:23.695: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:24.613: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:25.793: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:26.680: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:27.605: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:27.605: INFO: Pod daemon-set-j2hcp is not available
Dec 23 11:57:28.597: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:28.597: INFO: Pod daemon-set-j2hcp is not available
Dec 23 11:57:29.599: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:29.600: INFO: Pod daemon-set-j2hcp is not available
Dec 23 11:57:30.631: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:30.631: INFO: Pod daemon-set-j2hcp is not available
Dec 23 11:57:31.729: INFO: Wrong image for pod: daemon-set-j2hcp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 23 11:57:31.729: INFO: Pod daemon-set-j2hcp is not available
Dec 23 11:57:34.133: INFO: Pod daemon-set-jcx6t is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 23 11:57:34.569: INFO: Number of nodes with available pods: 0
Dec 23 11:57:34.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:35.595: INFO: Number of nodes with available pods: 0
Dec 23 11:57:35.595: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:36.653: INFO: Number of nodes with available pods: 0
Dec 23 11:57:36.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:37.600: INFO: Number of nodes with available pods: 0
Dec 23 11:57:37.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:38.949: INFO: Number of nodes with available pods: 0
Dec 23 11:57:38.949: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:39.623: INFO: Number of nodes with available pods: 0
Dec 23 11:57:39.623: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:40.602: INFO: Number of nodes with available pods: 0
Dec 23 11:57:40.602: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:41.620: INFO: Number of nodes with available pods: 0
Dec 23 11:57:41.620: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 11:57:42.745: INFO: Number of nodes with available pods: 1
Dec 23 11:57:42.745: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5vzcx, will wait for the garbage collector to delete the pods
Dec 23 11:57:43.033: INFO: Deleting DaemonSet.extensions daemon-set took: 31.730267ms
Dec 23 11:57:43.334: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.875635ms
Dec 23 11:57:52.740: INFO: Number of nodes with available pods: 0
Dec 23 11:57:52.740: INFO: Number of running nodes: 0, number of available pods: 0
Dec 23 11:57:52.744: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5vzcx/daemonsets","resourceVersion":"15788320"},"items":null}

Dec 23 11:57:52.747: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5vzcx/pods","resourceVersion":"15788320"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:57:52.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5vzcx" for this suite.
Dec 23 11:58:00.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:58:00.982: INFO: namespace: e2e-tests-daemonsets-5vzcx, resource: bindings, ignored listing per whitelist
Dec 23 11:58:00.991: INFO: namespace e2e-tests-daemonsets-5vzcx deletion completed in 8.230237632s

• [SLOW TEST:52.107 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
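For context, a minimal client-go sketch of the kind of DaemonSet this conformance test creates and then rolls: it is illustrative only, not taken from the test source; the labels, container name, namespace, and the context-aware Create signature (recent client-go) are assumptions.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy under test: changing the pod
			// template image later should replace the running daemon pods.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The image change seen in the log (nginx:1.14-alpine to the redis test image) would then be applied by fetching the DaemonSet, editing Spec.Template.Spec.Containers[0].Image, and calling Update, after which the "Wrong image for pod" lines track the rollout.
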
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:58:00.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 11:58:01.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-7qw57" to be "success or failure"
Dec 23 11:58:01.237: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.271956ms
Dec 23 11:58:03.928: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.718464072s
Dec 23 11:58:05.978: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.768789574s
Dec 23 11:58:08.613: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.40330222s
Dec 23 11:58:10.634: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.424655852s
Dec 23 11:58:12.711: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.501209582s
STEP: Saw pod success
Dec 23 11:58:12.711: INFO: Pod "downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:58:12.720: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 11:58:12.975: INFO: Waiting for pod downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:58:13.006: INFO: Pod downwardapi-volume-77efc2ff-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:58:13.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7qw57" for this suite.
Dec 23 11:58:19.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:58:19.348: INFO: namespace: e2e-tests-projected-7qw57, resource: bindings, ignored listing per whitelist
Dec 23 11:58:19.545: INFO: namespace e2e-tests-projected-7qw57 deletion completed in 6.523269366s

• [SLOW TEST:18.554 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
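A sketch of the projected downward API volume this kind of test mounts, with an explicit per-item Mode (the property being checked on the file). The pod and container names, image, command, mount path, and mode value are illustrative assumptions; the program only builds and prints the object, so it runs without a cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the mode the test would expect to see on the item file
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									Mode:     &mode, // per-item file mode, the property under test
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
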
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:58:19.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 11:58:19.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-64lmg" to be "success or failure"
Dec 23 11:58:19.900: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 158.396533ms
Dec 23 11:58:21.920: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178862562s
Dec 23 11:58:23.946: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205061209s
Dec 23 11:58:26.197: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455194614s
Dec 23 11:58:28.246: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504610295s
Dec 23 11:58:30.793: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.051402509s
STEP: Saw pod success
Dec 23 11:58:30.793: INFO: Pod "downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:58:30.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 11:58:31.272: INFO: Waiting for pod downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:58:31.287: INFO: Pod downwardapi-volume-83012248-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:58:31.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-64lmg" for this suite.
Dec 23 11:58:37.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:58:37.581: INFO: namespace: e2e-tests-projected-64lmg, resource: bindings, ignored listing per whitelist
Dec 23 11:58:37.661: INFO: namespace e2e-tests-projected-64lmg deletion completed in 6.314815714s

• [SLOW TEST:18.116 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
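The memory-request variant differs only in the projected item: it uses a ResourceFieldRef instead of a FieldRef. A minimal sketch of just that volume follows; the container name and file path are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected downward API item exposing the container's memory request as a file.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
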
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:58:37.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 11:58:37.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-x4gz5" to be "success or failure"
Dec 23 11:58:38.192: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 280.220191ms
Dec 23 11:58:40.214: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301396882s
Dec 23 11:58:42.257: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34455806s
Dec 23 11:58:44.288: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.376291461s
Dec 23 11:58:46.318: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405528916s
Dec 23 11:58:48.345: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.433090662s
STEP: Saw pod success
Dec 23 11:58:48.345: INFO: Pod "downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 11:58:48.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 11:58:48.636: INFO: Waiting for pod downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 11:58:48.656: INFO: Pod downwardapi-volume-8dd4a285-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:58:48.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x4gz5" for this suite.
Dec 23 11:58:56.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 11:58:56.829: INFO: namespace: e2e-tests-downward-api-x4gz5, resource: bindings, ignored listing per whitelist
Dec 23 11:58:56.985: INFO: namespace e2e-tests-downward-api-x4gz5 deletion completed in 8.312638661s

• [SLOW TEST:19.324 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
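This test exercises the plain downwardAPI volume source rather than the projected variant above. A minimal sketch of that volume, exposing the container's CPU limit as a file (names and path are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Plain (non-projected) downward API volume with the CPU limit as a file item.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
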
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 11:58:56.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 23 11:58:57.201: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788488,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 11:58:57.202: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788488,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 23 11:59:07.223: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788501,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 23 11:59:07.223: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788501,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 23 11:59:17.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788514,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 11:59:17.267: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788514,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 23 11:59:27.315: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788526,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 11:59:27.317: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-a,UID:994cd3c4-257b-11ea-a994-fa163e34d433,ResourceVersion:15788526,Generation:0,CreationTimestamp:2019-12-23 11:58:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 23 11:59:37.352: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-b,UID:b141efc5-257b-11ea-a994-fa163e34d433,ResourceVersion:15788539,Generation:0,CreationTimestamp:2019-12-23 11:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 11:59:37.353: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-b,UID:b141efc5-257b-11ea-a994-fa163e34d433,ResourceVersion:15788539,Generation:0,CreationTimestamp:2019-12-23 11:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 23 11:59:47.379: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-b,UID:b141efc5-257b-11ea-a994-fa163e34d433,ResourceVersion:15788552,Generation:0,CreationTimestamp:2019-12-23 11:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 11:59:47.380: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-hprt7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hprt7/configmaps/e2e-watch-test-configmap-b,UID:b141efc5-257b-11ea-a994-fa163e34d433,ResourceVersion:15788552,Generation:0,CreationTimestamp:2019-12-23 11:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 11:59:57.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hprt7" for this suite.
Dec 23 12:00:03.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:00:03.689: INFO: namespace: e2e-tests-watch-hprt7, resource: bindings, ignored listing per whitelist
Dec 23 12:00:03.818: INFO: namespace e2e-tests-watch-hprt7 deletion completed in 6.398222528s

• [SLOW TEST:66.832 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
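The ADDED / MODIFIED / DELETED lines above come from label-selected watches on ConfigMaps. A minimal client-go sketch of one such watch; the namespace is an assumption and the selector value mirrors the "multiple-watchers-A" label shown in the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch ConfigMaps carrying label A, as the "correct watchers observe the
	// notification" steps do.
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Events arrive as ADDED / MODIFIED / DELETED, matching the log lines.
		fmt.Println("Got:", ev.Type)
	}
}
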
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:00:03.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 23 12:00:04.155: INFO: Waiting up to 5m0s for pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-f5wlz" to be "success or failure"
Dec 23 12:00:04.235: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 80.483903ms
Dec 23 12:00:06.296: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140761242s
Dec 23 12:00:08.330: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174966703s
Dec 23 12:00:10.349: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194126669s
Dec 23 12:00:12.372: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216825056s
Dec 23 12:00:14.653: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.498202717s
STEP: Saw pod success
Dec 23 12:00:14.653: INFO: Pod "pod-c13909e0-257b-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:00:14.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c13909e0-257b-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 12:00:14.903: INFO: Waiting for pod pod-c13909e0-257b-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:00:14.997: INFO: Pod pod-c13909e0-257b-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:00:14.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f5wlz" for this suite.
Dec 23 12:00:21.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:00:21.147: INFO: namespace: e2e-tests-emptydir-f5wlz, resource: bindings, ignored listing per whitelist
Dec 23 12:00:21.197: INFO: namespace e2e-tests-emptydir-f5wlz deletion completed in 6.168862558s

• [SLOW TEST:17.379 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
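"(root,0666,tmpfs)" means the pod writes a file as root with mode 0666 onto a memory-backed emptyDir. A sketch of such a pod; the image and shell command are assumptions (the real test uses its own mount-test image), and the program only prints the object.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file as root with mode 0666 and read its permissions back.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
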
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:00:21.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 23 12:00:21.480: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6c7rv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6c7rv/configmaps/e2e-watch-test-label-changed,UID:cb7f6df3-257b-11ea-a994-fa163e34d433,ResourceVersion:15788626,Generation:0,CreationTimestamp:2019-12-23 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 12:00:21.480: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6c7rv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6c7rv/configmaps/e2e-watch-test-label-changed,UID:cb7f6df3-257b-11ea-a994-fa163e34d433,ResourceVersion:15788627,Generation:0,CreationTimestamp:2019-12-23 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 23 12:00:21.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6c7rv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6c7rv/configmaps/e2e-watch-test-label-changed,UID:cb7f6df3-257b-11ea-a994-fa163e34d433,ResourceVersion:15788628,Generation:0,CreationTimestamp:2019-12-23 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 23 12:00:31.577: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6c7rv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6c7rv/configmaps/e2e-watch-test-label-changed,UID:cb7f6df3-257b-11ea-a994-fa163e34d433,ResourceVersion:15788642,Generation:0,CreationTimestamp:2019-12-23 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 12:00:31.577: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6c7rv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6c7rv/configmaps/e2e-watch-test-label-changed,UID:cb7f6df3-257b-11ea-a994-fa163e34d433,ResourceVersion:15788643,Generation:0,CreationTimestamp:2019-12-23 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 23 12:00:31.578: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6c7rv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6c7rv/configmaps/e2e-watch-test-label-changed,UID:cb7f6df3-257b-11ea-a994-fa163e34d433,ResourceVersion:15788644,Generation:0,CreationTimestamp:2019-12-23 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:00:31.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6c7rv" for this suite.
Dec 23 12:00:37.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:00:37.824: INFO: namespace: e2e-tests-watch-6c7rv, resource: bindings, ignored listing per whitelist
Dec 23 12:00:37.904: INFO: namespace e2e-tests-watch-6c7rv deletion completed in 6.309048143s

• [SLOW TEST:16.706 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
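The "changing the label value" steps above amount to an Update that makes the object stop matching the watch's label selector, which is why the watcher sees a DELETED event. A hedged sketch of that update; the namespace and replacement label value are assumptions.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cms := client.CoreV1().ConfigMaps("default")
	cm, err := cms.Get(context.TODO(), "e2e-watch-test-label-changed", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Labels == nil {
		cm.Labels = map[string]string{}
	}
	// Once the label value no longer matches the selector, the watcher is sent
	// a DELETED notification even though the object still exists.
	cm.Labels["watch-this-configmap"] = "label-changed"
	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
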
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:00:37.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 12:00:38.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qwn95'
Dec 23 12:00:40.227: INFO: stderr: ""
Dec 23 12:00:40.228: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 23 12:00:40.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qwn95'
Dec 23 12:00:43.010: INFO: stderr: ""
Dec 23 12:00:43.010: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:00:43.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qwn95" for this suite.
Dec 23 12:00:49.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:00:49.246: INFO: namespace: e2e-tests-kubectl-qwn95, resource: bindings, ignored listing per whitelist
Dec 23 12:00:49.496: INFO: namespace e2e-tests-kubectl-qwn95 deletion completed in 6.453060103s

• [SLOW TEST:11.592 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
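The kubectl invocation above creates a bare Pod (no controller) because of --restart=Never. A rough API-level equivalent in client-go; the namespace and the context-aware Create signature are assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl run e2e-test-nginx-pod --restart=Never --image=...`.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
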
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:00:49.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:00:49.709: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.146948ms)
Dec 23 12:00:49.715: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.852949ms)
Dec 23 12:00:49.720: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.089572ms)
Dec 23 12:00:49.725: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.33682ms)
Dec 23 12:00:49.730: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.150342ms)
Dec 23 12:00:49.736: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.92ms)
Dec 23 12:00:49.741: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.216326ms)
Dec 23 12:00:49.748: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.65548ms)
Dec 23 12:00:49.753: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.582998ms)
Dec 23 12:00:49.758: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.93219ms)
Dec 23 12:00:49.777: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.946774ms)
Dec 23 12:00:49.795: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.305374ms)
Dec 23 12:00:49.809: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.996145ms)
Dec 23 12:00:49.817: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.691562ms)
Dec 23 12:00:49.826: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.063084ms)
Dec 23 12:00:49.833: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.548187ms)
Dec 23 12:00:49.839: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.036356ms)
Dec 23 12:00:49.847: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.922803ms)
Dec 23 12:00:49.854: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.223705ms)
Dec 23 12:00:49.863: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.483304ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:00:49.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-s9z5g" for this suite.
Dec 23 12:00:55.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:00:56.058: INFO: namespace: e2e-tests-proxy-s9z5g, resource: bindings, ignored listing per whitelist
Dec 23 12:00:56.081: INFO: namespace e2e-tests-proxy-s9z5g deletion completed in 6.212573959s

• [SLOW TEST:6.584 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
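Each "(N) /api/v1/nodes/...:10250/proxy/logs/" line above is an apiserver-proxied GET against the node subresource, addressed with an explicit kubelet port. A client-go sketch that builds the same request path; the node name and port are taken from the log, while the method chain assumes the current REST client API.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Builds /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/
	raw, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("hunter-server-hu5at5svl7ps:10250").
		SubResource("proxy").
		Suffix("logs/").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", raw)
}
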
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:00:56.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 12:00:56.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-rgfhh'
Dec 23 12:00:56.574: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 12:00:56.575: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 23 12:00:56.642: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 23 12:00:56.682: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 23 12:00:56.732: INFO: scanned /root for discovery docs: 
Dec 23 12:00:56.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-rgfhh'
Dec 23 12:01:24.241: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 23 12:01:24.241: INFO: stdout: "Created e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381\nScaling up e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 23 12:01:24.241: INFO: stdout: "Created e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381\nScaling up e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 23 12:01:24.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rgfhh'
Dec 23 12:01:24.401: INFO: stderr: ""
Dec 23 12:01:24.401: INFO: stdout: "e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381-5n76v "
Dec 23 12:01:24.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381-5n76v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rgfhh'
Dec 23 12:01:24.627: INFO: stderr: ""
Dec 23 12:01:24.627: INFO: stdout: "true"
Dec 23 12:01:24.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381-5n76v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rgfhh'
Dec 23 12:01:24.769: INFO: stderr: ""
Dec 23 12:01:24.769: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 23 12:01:24.769: INFO: e2e-test-nginx-rc-b8b3e5964185a7be872665243a712381-5n76v is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 23 12:01:24.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rgfhh'
Dec 23 12:01:24.999: INFO: stderr: ""
Dec 23 12:01:24.999: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:01:24.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rgfhh" for this suite.
Dec 23 12:01:49.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:01:49.378: INFO: namespace: e2e-tests-kubectl-rgfhh, resource: bindings, ignored listing per whitelist
Dec 23 12:01:49.399: INFO: namespace e2e-tests-kubectl-rgfhh deletion completed in 24.343734091s

• [SLOW TEST:53.318 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:01:49.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-00194ef9-257c-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:01:49.623: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-dfxvp" to be "success or failure"
Dec 23 12:01:49.637: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.119271ms
Dec 23 12:01:51.910: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287756919s
Dec 23 12:01:53.934: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311469771s
Dec 23 12:01:55.964: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340806933s
Dec 23 12:01:57.997: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.374088578s
Dec 23 12:02:00.021: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.398101864s
Dec 23 12:02:02.047: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.424300966s
STEP: Saw pod success
Dec 23 12:02:02.047: INFO: Pod "pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:02:02.065: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 23 12:02:03.165: INFO: Waiting for pod pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:02:03.180: INFO: Pod pod-projected-secrets-001a71a6-257c-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:02:03.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dfxvp" for this suite.
Dec 23 12:02:09.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:02:09.423: INFO: namespace: e2e-tests-projected-dfxvp, resource: bindings, ignored listing per whitelist
Dec 23 12:02:09.613: INFO: namespace e2e-tests-projected-dfxvp deletion completed in 6.413510643s

• [SLOW TEST:20.214 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
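"Consumable in multiple volumes" means the same Secret is projected into two volumes of one pod and mounted at two paths. A sketch under assumed names (secret, mounts, image, command); it only constructs and prints the pod.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Helper building a projected-Secret volume with a given name.
	secretVolume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/projected-secret-volume-1 /etc/projected-secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-secret-volume-1", MountPath: "/etc/projected-secret-volume-1", ReadOnly: true},
					{Name: "projected-secret-volume-2", MountPath: "/etc/projected-secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				secretVolume("projected-secret-volume-1"),
				secretVolume("projected-secret-volume-2"),
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
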
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:02:09.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 23 12:02:09.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-n9njv,SelfLink:/api/v1/namespaces/e2e-tests-watch-n9njv/configmaps/e2e-watch-test-resource-version,UID:0c234aba-257c-11ea-a994-fa163e34d433,ResourceVersion:15788907,Generation:0,CreationTimestamp:2019-12-23 12:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 12:02:09.877: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-n9njv,SelfLink:/api/v1/namespaces/e2e-tests-watch-n9njv/configmaps/e2e-watch-test-resource-version,UID:0c234aba-257c-11ea-a994-fa163e34d433,ResourceVersion:15788908,Generation:0,CreationTimestamp:2019-12-23 12:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:02:09.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-n9njv" for this suite.
Dec 23 12:02:15.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:02:16.123: INFO: namespace: e2e-tests-watch-n9njv, resource: bindings, ignored listing per whitelist
Dec 23 12:02:16.259: INFO: namespace e2e-tests-watch-n9njv deletion completed in 6.377222432s

• [SLOW TEST:6.645 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
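
The watch test records the configmap's resourceVersion after the first modification and then opens a watch starting at that version, so only the second MODIFIED event and the DELETED event are expected (exactly the two events logged above). Roughly the same check by hand, using the names from this run (the namespace has since been deleted, so this is illustrative only):

NS=e2e-tests-watch-n9njv
# capture the resourceVersion after the first update
RV=$(kubectl get configmap e2e-watch-test-resource-version -n "$NS" \
      -o jsonpath='{.metadata.resourceVersion}')
# start a watch at that version; events before it are not replayed
kubectl get --raw "/api/v1/namespaces/$NS/configmaps?watch=true&resourceVersion=$RV&labelSelector=watch-this-configmap%3Dfrom-resource-version"
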
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:02:16.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:02:16.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:02:26.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-w6lqj" for this suite.
Dec 23 12:03:16.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:03:16.910: INFO: namespace: e2e-tests-pods-w6lqj, resource: bindings, ignored listing per whitelist
Dec 23 12:03:17.006: INFO: namespace e2e-tests-pods-w6lqj deletion completed in 50.184865617s

• [SLOW TEST:60.747 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
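
This case reads container logs through the API server's log subresource over a websocket rather than with kubectl logs. A plain-HTTP sketch of the same subresource (the websocket upgrade is omitted, and the pod/namespace names are placeholders):

# expose the API server locally
kubectl proxy --port=8001 &
# stream logs from the pod's "log" subresource
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/example-pod/log?follow=true"
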
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:03:17.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-345ae4fe-257c-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:03:17.325: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-cjwq7" to be "success or failure"
Dec 23 12:03:17.375: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.688248ms
Dec 23 12:03:19.388: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062617959s
Dec 23 12:03:21.412: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086083908s
Dec 23 12:03:23.482: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15675828s
Dec 23 12:03:25.500: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.174029685s
Dec 23 12:03:27.527: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201928014s
STEP: Saw pod success
Dec 23 12:03:27.528: INFO: Pod "pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:03:27.578: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 12:03:27.766: INFO: Waiting for pod pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:03:27.788: INFO: Pod pod-projected-secrets-345cc301-257c-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:03:27.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cjwq7" for this suite.
Dec 23 12:03:33.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:03:34.022: INFO: namespace: e2e-tests-projected-cjwq7, resource: bindings, ignored listing per whitelist
Dec 23 12:03:34.088: INFO: namespace e2e-tests-projected-cjwq7 deletion completed in 6.279685809s

• [SLOW TEST:17.081 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
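
Here the secret is projected through a single volume, but the key is remapped to a new file name with an explicit item mode. A sketch of just the volume stanza; it slots into the same pod shape as the earlier projected-secret sketch, and the secret/key names are placeholders:

  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example  # placeholder secret
          items:
          - key: data-1                            # placeholder key
            path: new-path-data-1                  # remapped file name inside the mount
            mode: 0400                             # explicit per-item file mode
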
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:03:34.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 23 12:06:37.750: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:37.837: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:39.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:39.859: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:41.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:41.875: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:43.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:43.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:45.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:45.855: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:47.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:47.860: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:49.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:49.876: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:51.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:51.869: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:53.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:53.859: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:55.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:55.869: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:57.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:57.860: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:06:59.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:06:59.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:01.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:01.891: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:03.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:03.861: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:05.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:05.868: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:07.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:07.863: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:09.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:09.862: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:11.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:11.861: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:13.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:13.890: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:15.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:15.858: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:17.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:17.874: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:19.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:19.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:21.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:21.850: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:23.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:23.854: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:25.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:25.854: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:27.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:27.860: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:29.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:29.847: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:31.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:31.869: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:33.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:33.859: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:35.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:35.884: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:37.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:38.076: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:39.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:39.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:41.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:41.885: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:43.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:43.862: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:45.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:45.859: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:47.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:47.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:49.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:49.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:51.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:51.849: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:53.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:53.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:55.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:55.858: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:57.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:57.861: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:07:59.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:07:59.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:01.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:01.864: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:03.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:03.880: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:05.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:05.853: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:07.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:07.862: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:09.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:09.863: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:11.839: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:11.910: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:13.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:13.857: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:15.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:15.855: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:17.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:17.854: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:19.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:19.863: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:21.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:21.865: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:23.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:23.866: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:25.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:25.871: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:27.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:27.859: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:29.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:29.854: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:31.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:31.869: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 23 12:08:33.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 23 12:08:33.876: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:08:33.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jf46f" for this suite.
Dec 23 12:08:57.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:08:58.091: INFO: namespace: e2e-tests-container-lifecycle-hook-jf46f, resource: bindings, ignored listing per whitelist
Dec 23 12:08:58.134: INFO: namespace e2e-tests-container-lifecycle-hook-jf46f deletion completed in 24.240649035s

• [SLOW TEST:324.046 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
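
In this test the BeforeEach step starts a helper pod that serves the hook requests, and the pod under test declares a postStart exec hook; the long run of "still exists" lines above is only the framework polling until the hook pod is deleted. A minimal sketch of a pod with a postStart exec hook (the pod name is kept from the log, the image and hook command are placeholders; the real hook calls back to the helper pod):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  restartPolicy: Never
  containers:
  - name: pod-with-poststart-exec-hook
    image: docker.io/library/busybox        # placeholder image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container immediately after it starts
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
EOF
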
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:08:58.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 23 12:08:58.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vlhtf'
Dec 23 12:08:59.215: INFO: stderr: ""
Dec 23 12:08:59.215: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 23 12:09:01.065: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:01.065: INFO: Found 0 / 1
Dec 23 12:09:01.241: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:01.242: INFO: Found 0 / 1
Dec 23 12:09:02.231: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:02.231: INFO: Found 0 / 1
Dec 23 12:09:03.305: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:03.305: INFO: Found 0 / 1
Dec 23 12:09:04.235: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:04.235: INFO: Found 0 / 1
Dec 23 12:09:05.248: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:05.248: INFO: Found 0 / 1
Dec 23 12:09:06.234: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:06.234: INFO: Found 0 / 1
Dec 23 12:09:07.265: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:07.265: INFO: Found 0 / 1
Dec 23 12:09:08.241: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:08.241: INFO: Found 0 / 1
Dec 23 12:09:09.241: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:09.241: INFO: Found 1 / 1
Dec 23 12:09:09.241: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 23 12:09:09.262: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:09.262: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 23 12:09:09.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-c77gh --namespace=e2e-tests-kubectl-vlhtf -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 23 12:09:09.493: INFO: stderr: ""
Dec 23 12:09:09.494: INFO: stdout: "pod/redis-master-c77gh patched\n"
STEP: checking annotations
Dec 23 12:09:09.519: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:09:09.519: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:09:09.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vlhtf" for this suite.
Dec 23 12:09:33.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:09:33.729: INFO: namespace: e2e-tests-kubectl-vlhtf, resource: bindings, ignored listing per whitelist
Dec 23 12:09:33.821: INFO: namespace e2e-tests-kubectl-vlhtf deletion completed in 24.294989936s

• [SLOW TEST:35.686 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
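
The patch step is a strategic-merge patch that adds an annotation to the pod created by the redis-master replication controller. The first command below is the one from the log (the pod name redis-master-c77gh was generated for this run), followed by a loop form that patches every pod matching the RC's app=redis selector:

NS=e2e-tests-kubectl-vlhtf
kubectl patch pod redis-master-c77gh -n "$NS" -p '{"metadata":{"annotations":{"x":"y"}}}'

# equivalent for all pods behind the selector
for p in $(kubectl get pods -n "$NS" -l app=redis -o name); do
  kubectl patch "$p" -n "$NS" -p '{"metadata":{"annotations":{"x":"y"}}}'
done
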
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:09:33.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 23 12:09:34.270: INFO: Waiting up to 5m0s for pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-bqvpr" to be "success or failure"
Dec 23 12:09:34.296: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.946372ms
Dec 23 12:09:36.390: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119875884s
Dec 23 12:09:38.426: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155914509s
Dec 23 12:09:40.663: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393309144s
Dec 23 12:09:42.700: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42997308s
Dec 23 12:09:44.740: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.470320246s
STEP: Saw pod success
Dec 23 12:09:44.741: INFO: Pod "downward-api-1500023e-257d-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:09:44.751: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1500023e-257d-11ea-a9d2-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 23 12:09:44.888: INFO: Waiting for pod downward-api-1500023e-257d-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:09:44.899: INFO: Pod downward-api-1500023e-257d-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:09:44.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bqvpr" for this suite.
Dec 23 12:09:51.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:09:51.157: INFO: namespace: e2e-tests-downward-api-bqvpr, resource: bindings, ignored listing per whitelist
Dec 23 12:09:51.232: INFO: namespace e2e-tests-downward-api-bqvpr deletion completed in 6.326090642s

• [SLOW TEST:17.408 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
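
This test injects the container's own requests and limits as environment variables via the downward API's resourceFieldRef. A compact sketch with placeholder names and arbitrary resource values (the log does not include the manifest the framework generated):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example       # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox   # placeholder image
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits:   { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF
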
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:09:51.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-5mnv
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 12:09:52.662: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5mnv" in namespace "e2e-tests-subpath-zph95" to be "success or failure"
Dec 23 12:09:52.673: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.047513ms
Dec 23 12:09:54.799: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136610279s
Dec 23 12:09:56.832: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169327707s
Dec 23 12:09:58.853: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190311273s
Dec 23 12:10:00.892: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229572161s
Dec 23 12:10:02.915: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.252727439s
Dec 23 12:10:04.936: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.273485092s
Dec 23 12:10:06.959: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.296757325s
Dec 23 12:10:09.158: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.495878928s
Dec 23 12:10:11.199: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 18.536470422s
Dec 23 12:10:13.268: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 20.605374252s
Dec 23 12:10:15.287: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 22.624671132s
Dec 23 12:10:17.301: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 24.638365757s
Dec 23 12:10:19.348: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 26.685373404s
Dec 23 12:10:21.372: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 28.709876213s
Dec 23 12:10:23.392: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 30.730089786s
Dec 23 12:10:25.416: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Running", Reason="", readiness=false. Elapsed: 32.754123021s
Dec 23 12:10:27.457: INFO: Pod "pod-subpath-test-projected-5mnv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.794446846s
STEP: Saw pod success
Dec 23 12:10:27.457: INFO: Pod "pod-subpath-test-projected-5mnv" satisfied condition "success or failure"
Dec 23 12:10:27.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-5mnv container test-container-subpath-projected-5mnv: 
STEP: delete the pod
Dec 23 12:10:27.813: INFO: Waiting for pod pod-subpath-test-projected-5mnv to disappear
Dec 23 12:10:27.851: INFO: Pod pod-subpath-test-projected-5mnv no longer exists
STEP: Deleting pod pod-subpath-test-projected-5mnv
Dec 23 12:10:27.851: INFO: Deleting pod "pod-subpath-test-projected-5mnv" in namespace "e2e-tests-subpath-zph95"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:10:27.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zph95" for this suite.
Dec 23 12:10:34.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:10:34.238: INFO: namespace: e2e-tests-subpath-zph95, resource: bindings, ignored listing per whitelist
Dec 23 12:10:34.262: INFO: namespace e2e-tests-subpath-zph95 deletion completed in 6.383483194s

• [SLOW TEST:43.029 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
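
The atomic-writer subpath case mounts a single file of a projected volume into the container with subPath and reads it back while the pod runs. A reduced sketch of that mount shape (the projected source, key, and image are placeholders; the real test generates its own data and polls the file repeatedly):

kubectl create configmap subpath-configmap-example --from-literal=mykey=hello

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example   # placeholder
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-configmap-example
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox         # placeholder image
    command: ["sh", "-c", "cat /test-volume/mykey"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume/mykey
      subPath: mykey    # mount only this key's file, not the whole volume
EOF
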
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:10:34.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 23 12:10:54.721: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 23 12:10:54.753: INFO: Pod pod-with-poststart-http-hook still exists
Dec 23 12:10:56.753: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 23 12:10:56.789: INFO: Pod pod-with-poststart-http-hook still exists
Dec 23 12:10:58.753: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 23 12:10:58.776: INFO: Pod pod-with-poststart-http-hook still exists
Dec 23 12:11:00.753: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 23 12:11:00.774: INFO: Pod pod-with-poststart-http-hook still exists
Dec 23 12:11:02.754: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 23 12:11:02.867: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:11:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gnrl4" for this suite.
Dec 23 12:11:26.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:11:26.990: INFO: namespace: e2e-tests-container-lifecycle-hook-gnrl4, resource: bindings, ignored listing per whitelist
Dec 23 12:11:27.087: INFO: namespace e2e-tests-container-lifecycle-hook-gnrl4 deletion completed in 24.208048618s

• [SLOW TEST:52.825 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
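
The http variant of the lifecycle-hook test differs from the exec variant above only in the handler: the postStart hook issues an HTTP GET against the helper pod instead of running a command. A drop-in replacement for the lifecycle stanza in the earlier sketch (path, port, and host are placeholders; the log does not record the helper pod's IP):

    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # placeholder path on the helper pod
          port: 8080                  # placeholder port
          host: 10.32.0.4             # placeholder: IP of the hook-handler pod
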
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:11:27.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 12:11:27.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-2fckc" to be "success or failure"
Dec 23 12:11:27.374: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.464773ms
Dec 23 12:11:29.780: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435058501s
Dec 23 12:11:31.849: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503266101s
Dec 23 12:11:33.877: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531131668s
Dec 23 12:11:35.951: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605405806s
Dec 23 12:11:37.964: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.618400107s
Dec 23 12:11:39.981: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.63533205s
STEP: Saw pod success
Dec 23 12:11:39.981: INFO: Pod "downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:11:39.985: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 12:11:40.703: INFO: Waiting for pod downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:11:40.741: INFO: Pod downwardapi-volume-5872c928-257d-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:11:40.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2fckc" for this suite.
Dec 23 12:11:46.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:11:47.113: INFO: namespace: e2e-tests-projected-2fckc, resource: bindings, ignored listing per whitelist
Dec 23 12:11:47.143: INFO: namespace e2e-tests-projected-2fckc deletion completed in 6.330745149s

• [SLOW TEST:20.055 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
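
Here the container deliberately sets no CPU limit, so the downward API file for limits.cpu falls back to reporting the node's allocatable CPU. A sketch of the projected downwardAPI volume (names and image are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # placeholder
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu     # no limit is set below, so node allocatable CPU is reported
  containers:
  - name: client-container
    image: docker.io/library/busybox   # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF
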
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:11:47.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 23 12:11:47.449: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix164430780/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:11:47.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4g8qx" for this suite.
Dec 23 12:11:53.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:11:53.804: INFO: namespace: e2e-tests-kubectl-4g8qx, resource: bindings, ignored listing per whitelist
Dec 23 12:11:53.884: INFO: namespace e2e-tests-kubectl-4g8qx deletion completed in 6.313871118s

• [SLOW TEST:6.741 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
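
The proxy test starts kubectl proxy on a unix socket and then fetches /api/ through it. The same check by hand (the socket path below is arbitrary; the test used a generated path under /tmp):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# query the API root through the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
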
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:11:53.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-68725618-257d-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 12:11:54.249: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-ws296" to be "success or failure"
Dec 23 12:11:54.280: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.385767ms
Dec 23 12:11:56.597: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34768419s
Dec 23 12:11:58.618: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368801075s
Dec 23 12:12:00.646: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397654436s
Dec 23 12:12:02.656: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.407209218s
Dec 23 12:12:04.708: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.459100357s
STEP: Saw pod success
Dec 23 12:12:04.708: INFO: Pod "pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:12:04.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 12:12:05.135: INFO: Waiting for pod pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:12:05.159: INFO: Pod pod-projected-configmaps-68740410-257d-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:12:05.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ws296" for this suite.
Dec 23 12:12:11.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:12:11.405: INFO: namespace: e2e-tests-projected-ws296, resource: bindings, ignored listing per whitelist
Dec 23 12:12:11.451: INFO: namespace e2e-tests-projected-ws296 deletion completed in 6.174666369s

• [SLOW TEST:17.567 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
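
This is the configMap counterpart of the projected-secret cases above: a configmap key is projected into a volume and read back from the container. Creating the source object is a one-liner; the pod takes the same shape as the earlier projected-volume sketches, with the secret source swapped for a configMap source (names here are placeholders):

kubectl create configmap projected-configmap-test-volume-example --from-literal=data-1=value-1

# projected source stanza to use in place of the secret source in the earlier pod sketch:
#   - configMap:
#       name: projected-configmap-test-volume-example
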
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:12:11.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gzc6m
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-gzc6m
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-gzc6m
Dec 23 12:12:11.661: INFO: Found 0 stateful pods, waiting for 1
Dec 23 12:12:21.680: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 23 12:12:21.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 12:12:22.757: INFO: stderr: ""
Dec 23 12:12:22.758: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 12:12:22.758: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 23 12:12:22.804: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 23 12:12:32.867: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 12:12:32.867: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 12:12:32.917: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999989074s
Dec 23 12:12:34.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982127107s
Dec 23 12:12:35.070: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.881020821s
Dec 23 12:12:36.091: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.829618645s
Dec 23 12:12:37.112: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.809247886s
Dec 23 12:12:38.139: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.787671304s
Dec 23 12:12:39.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.760198028s
Dec 23 12:12:40.184: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.735788175s
Dec 23 12:12:41.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.715494098s
Dec 23 12:12:42.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 690.932586ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-gzc6m
Dec 23 12:12:43.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 12:12:44.024: INFO: stderr: ""
Dec 23 12:12:44.024: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 12:12:44.025: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 23 12:12:44.058: INFO: Found 1 stateful pods, waiting for 3
Dec 23 12:12:54.072: INFO: Found 2 stateful pods, waiting for 3
Dec 23 12:13:04.071: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:13:04.071: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:13:04.071: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 12:13:14.092: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:13:14.092: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:13:14.092: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 23 12:13:14.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 12:13:14.824: INFO: stderr: ""
Dec 23 12:13:14.824: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 12:13:14.824: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 23 12:13:14.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 12:13:15.625: INFO: stderr: ""
Dec 23 12:13:15.625: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 12:13:15.625: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 23 12:13:15.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 23 12:13:16.242: INFO: stderr: ""
Dec 23 12:13:16.242: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 23 12:13:16.242: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 23 12:13:16.242: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 12:13:16.283: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 23 12:13:26.411: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 12:13:26.411: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 12:13:26.411: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 12:13:26.490: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999992414s
Dec 23 12:13:27.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973532732s
Dec 23 12:13:28.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.95465897s
Dec 23 12:13:29.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.914830596s
Dec 23 12:13:30.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.89480398s
Dec 23 12:13:31.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.808532267s
Dec 23 12:13:32.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.783084313s
Dec 23 12:13:33.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.760659531s
Dec 23 12:13:35.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.486945407s
Dec 23 12:13:36.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 462.146917ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-gzc6m
Dec 23 12:13:37.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 12:13:37.717: INFO: stderr: ""
Dec 23 12:13:37.717: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 12:13:37.718: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 23 12:13:37.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 12:13:38.585: INFO: stderr: ""
Dec 23 12:13:38.585: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 12:13:38.585: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 23 12:13:38.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzc6m ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 23 12:13:38.983: INFO: stderr: ""
Dec 23 12:13:38.983: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 23 12:13:38.984: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 23 12:13:38.984: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 23 12:13:59.042: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gzc6m
Dec 23 12:13:59.086: INFO: Scaling statefulset ss to 0
Dec 23 12:13:59.125: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 12:13:59.141: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:13:59.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gzc6m" for this suite.
Dec 23 12:14:07.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:14:07.360: INFO: namespace: e2e-tests-statefulset-gzc6m, resource: bindings, ignored listing per whitelist
Dec 23 12:14:07.453: INFO: namespace e2e-tests-statefulset-gzc6m deletion completed in 8.272458542s

• [SLOW TEST:116.002 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
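
The scaling assertions above work by breaking the nginx readiness probe (the kubectl exec ... mv commands in the log move index.html out of the web root), then checking that scale-up halts at one replica while ss-0 is un-Ready, that scale-down likewise waits while pods are un-Ready, and that pods are created and removed in ordinal order once readiness is restored. Condensed by hand, using the names from this run (the namespace has since been deleted):

NS=e2e-tests-statefulset-gzc6m
# make ss-0 fail its readiness probe (same command the framework ran)
kubectl exec -n "$NS" ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# a scale-up to 3 now stalls: ss-1 and ss-2 are not created while ss-0 is un-Ready
kubectl scale statefulset ss -n "$NS" --replicas=3
kubectl get pods -n "$NS" -l baz=blah,foo=bar
# restore readiness and the scale-up proceeds in order ss-0, ss-1, ss-2
kubectl exec -n "$NS" ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
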
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:14:07.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:14:07.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 23 12:14:07.766: INFO: stderr: ""
Dec 23 12:14:07.766: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 23 12:14:07.774: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:14:07.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jvtkv" for this suite.
Dec 23 12:14:13.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:14:14.261: INFO: namespace: e2e-tests-kubectl-jvtkv, resource: bindings, ignored listing per whitelist
Dec 23 12:14:14.504: INFO: namespace e2e-tests-kubectl-jvtkv deletion completed in 6.685624198s

S [SKIPPING] [7.049 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 23 12:14:07.774: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
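The skip above is driven purely by a client/server version comparison: the client is v1.13.12 while the kube-apiserver reports v1.13.8. A rough sketch of the equivalent manual check, with <namespace> and the resource names as placeholders:

kubectl version --short                          # compare Client Version and Server Version
kubectl describe rc <rc-name> -n <namespace>     # what the skipped test would have inspected
kubectl describe pod <pod-name> -n <namespace>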
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:14:14.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 23 12:14:25.579: INFO: Successfully updated pod "annotationupdatebc5f15aa-257d-11ea-a9d2-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:14:29.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zmv97" for this suite.
Dec 23 12:14:57.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:14:57.923: INFO: namespace: e2e-tests-downward-api-zmv97, resource: bindings, ignored listing per whitelist
Dec 23 12:14:57.990: INFO: namespace e2e-tests-downward-api-zmv97 deletion completed in 28.242150087s

• [SLOW TEST:43.485 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
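The annotation-update test above relies on a downwardAPI volume, whose files the kubelet rewrites when pod metadata changes. A minimal hypothetical manifest showing the mechanism (pod name, image, and paths are illustrative, not the test's own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# change the annotation; the mounted file is eventually rewritten by the kubelet
kubectl annotate pod annotationupdate-demo build="two" --overwrite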
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:14:57.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 23 12:15:08.300: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d623789d-257d-11ea-a9d2-0242ac110005,GenerateName:,Namespace:e2e-tests-events-k88vk,SelfLink:/api/v1/namespaces/e2e-tests-events-k88vk/pods/send-events-d623789d-257d-11ea-a9d2-0242ac110005,UID:d624cd29-257d-11ea-a994-fa163e34d433,ResourceVersion:15790380,Generation:0,CreationTimestamp:2019-12-23 12:14:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 201036870,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-76prz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76prz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-76prz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021f5fa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021f5fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:14:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:15:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:15:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:14:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-23 12:14:58 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-23 12:15:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://c3717ba977d573fe9198e0d33ccdc59b83a487d8670ef26c6f737f4bd8b1c7ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 23 12:15:10.322: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 23 12:15:12.338: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:15:12.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-k88vk" for this suite.
Dec 23 12:15:54.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:15:54.680: INFO: namespace: e2e-tests-events-k88vk, resource: bindings, ignored listing per whitelist
Dec 23 12:15:54.721: INFO: namespace e2e-tests-events-k88vk deletion completed in 42.277471068s

• [SLOW TEST:56.730 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
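The scheduler and kubelet event checks above can be done directly against the Events API. A sketch, assuming the standard core/v1 Event field selectors and with the pod name and namespace as placeholders:

kubectl get events -n <namespace> --field-selector involvedObject.kind=Pod,involvedObject.name=<pod-name>
kubectl get events -n <namespace> --field-selector reason=Scheduled   # emitted by the scheduler
kubectl get events -n <namespace> --field-selector reason=Started     # emitted by the kubelet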
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:15:54.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f80380e3-257d-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 12:15:55.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-k5klt" to be "success or failure"
Dec 23 12:15:55.090: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.242762ms
Dec 23 12:15:57.467: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396629314s
Dec 23 12:15:59.494: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423735985s
Dec 23 12:16:01.761: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689903093s
Dec 23 12:16:03.784: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713732142s
Dec 23 12:16:06.676: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.605496023s
STEP: Saw pod success
Dec 23 12:16:06.677: INFO: Pod "pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:16:06.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 12:16:07.181: INFO: Waiting for pod pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:16:07.193: INFO: Pod pod-projected-configmaps-f804e5c9-257d-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:16:07.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k5klt" for this suite.
Dec 23 12:16:13.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:16:13.337: INFO: namespace: e2e-tests-projected-k5klt, resource: bindings, ignored listing per whitelist
Dec 23 12:16:13.465: INFO: namespace e2e-tests-projected-k5klt deletion completed in 6.214249956s

• [SLOW TEST:18.744 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
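What the projected-configMap test above exercises is a single ConfigMap surfaced through two projected volumes in the same pod. A minimal hypothetical equivalent (all names are illustrative):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-one/data-1 /etc/projected-two/data-1"]
    volumeMounts:
    - name: one
      mountPath: /etc/projected-one
    - name: two
      mountPath: /etc/projected-two
  volumes:
  - name: one
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: two
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-configmap-demo   # prints value-1 twice once the pod has Succeeded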
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:16:13.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-r4ht9
Dec 23 12:16:24.150: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-r4ht9
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 12:16:24.156: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:20:25.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-r4ht9" for this suite.
Dec 23 12:20:31.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:20:31.615: INFO: namespace: e2e-tests-container-probe-r4ht9, resource: bindings, ignored listing per whitelist
Dec 23 12:20:31.701: INFO: namespace e2e-tests-container-probe-r4ht9 deletion completed in 6.218873297s

• [SLOW TEST:258.234 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
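The probe exercised above is an exec liveness probe that keeps succeeding as long as "cat /tmp/health" exits 0, so the restart count stays at 0 for the whole observation window. A minimal hypothetical pod showing the shape of such a probe:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # should stay 0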
SSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:20:31.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 23 12:20:42.046: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-9d09aa11-257e-11ea-a9d2-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-pmqgr", SelfLink:"/api/v1/namespaces/e2e-tests-pods-pmqgr/pods/pod-submit-remove-9d09aa11-257e-11ea-a9d2-0242ac110005", UID:"9d0be20c-257e-11ea-a994-fa163e34d433", ResourceVersion:"15790844", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712700431, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"898497275"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5lt4q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001e42980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5lt4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e30b28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c623c0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e30ba0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e30bc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000e30bc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000e30bcc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712700432, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712700440, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712700440, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712700431, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000866800), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000866820), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://544610d9fc7f7017e098160f0fd03bdb8e03dd9acf9fc64b18918c0bd0ffc4cd"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:20:52.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pmqgr" for this suite.
Dec 23 12:20:58.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:20:58.972: INFO: namespace: e2e-tests-pods-pmqgr, resource: bindings, ignored listing per whitelist
Dec 23 12:20:59.059: INFO: namespace e2e-tests-pods-pmqgr deletion completed in 6.341638673s

• [SLOW TEST:27.359 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
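The submit-and-remove flow above is driven through a watch on the pod list. A rough equivalent with plain kubectl, using placeholder names and namespace:

kubectl get pods -n <namespace> -w &                                    # observe the ADDED and DELETED events
kubectl run pod-submit-demo --image=nginx:1.14-alpine --restart=Never -n <namespace>
kubectl delete pod pod-submit-demo --grace-period=30 -n <namespace>     # graceful deletion, as in the test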
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:20:59.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 23 12:20:59.388: INFO: Waiting up to 5m0s for pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-r7p4g" to be "success or failure"
Dec 23 12:20:59.413: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.825112ms
Dec 23 12:21:01.792: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40416832s
Dec 23 12:21:03.814: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425709875s
Dec 23 12:21:05.964: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575764462s
Dec 23 12:21:07.995: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606670941s
Dec 23 12:21:10.016: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.628274466s
STEP: Saw pod success
Dec 23 12:21:10.016: INFO: Pod "pod-ad691d7d-257e-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:21:10.022: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ad691d7d-257e-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 12:21:10.265: INFO: Waiting for pod pod-ad691d7d-257e-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:21:10.276: INFO: Pod pod-ad691d7d-257e-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:21:10.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r7p4g" for this suite.
Dec 23 12:21:16.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:21:16.418: INFO: namespace: e2e-tests-emptydir-r7p4g, resource: bindings, ignored listing per whitelist
Dec 23 12:21:16.712: INFO: namespace e2e-tests-emptydir-r7p4g deletion completed in 6.427806485s

• [SLOW TEST:17.653 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
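The emptyDir variant tested above is tmpfs-backed (medium: Memory) and accessed as a non-root user. A hypothetical pod demonstrating the volume definition; the user ID and mount path are arbitrary choices:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "mount | grep /ephemeral; ls -ld /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo   # shows the tmpfs mount backing the volume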
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:21:16.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b7ff3fdf-257e-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:21:17.156: INFO: Waiting up to 5m0s for pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-42nm9" to be "success or failure"
Dec 23 12:21:17.164: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.443757ms
Dec 23 12:21:19.267: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11038062s
Dec 23 12:21:21.290: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133075208s
Dec 23 12:21:23.361: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20406043s
Dec 23 12:21:25.387: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230889004s
Dec 23 12:21:27.421: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.264221697s
STEP: Saw pod success
Dec 23 12:21:27.421: INFO: Pod "pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:21:27.431: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 23 12:21:27.840: INFO: Waiting for pod pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:21:30.147: INFO: Pod pod-secrets-b80114e7-257e-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:21:30.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-42nm9" for this suite.
Dec 23 12:21:36.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:21:36.289: INFO: namespace: e2e-tests-secrets-42nm9, resource: bindings, ignored listing per whitelist
Dec 23 12:21:36.443: INFO: namespace e2e-tests-secrets-42nm9 deletion completed in 6.266965029s

• [SLOW TEST:19.730 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:21:36.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mp5jj
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 12:21:36.861: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 12:22:15.166: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mp5jj PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:22:15.167: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:22:15.662: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:22:15.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-mp5jj" for this suite.
Dec 23 12:22:41.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:22:41.900: INFO: namespace: e2e-tests-pod-network-test-mp5jj, resource: bindings, ignored listing per whitelist
Dec 23 12:22:41.913: INFO: namespace e2e-tests-pod-network-test-mp5jj deletion completed in 26.237493646s

• [SLOW TEST:65.469 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
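The connectivity check above execs into a host-network helper pod and curls the target pod's hostName endpoint on port 8080, matching the command recorded in the log. Done by hand, with the pod IP looked up first and the namespace as a placeholder:

POD_IP=$(kubectl get pod netserver-0 -n <namespace> -o jsonpath='{.status.podIP}')
kubectl exec -n <namespace> host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://${POD_IP}:8080/hostName | grep -v '^\s*$'"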
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:22:41.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-86fw5
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-86fw5
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-86fw5
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-86fw5
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-86fw5
Dec 23 12:22:54.499: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-86fw5, name: ss-0, uid: f1a4a1de-257e-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 23 12:22:54.700: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-86fw5, name: ss-0, uid: f1a4a1de-257e-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 23 12:22:54.799: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-86fw5, name: ss-0, uid: f1a4a1de-257e-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 23 12:22:54.844: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-86fw5
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-86fw5
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-86fw5 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 23 12:23:08.339: INFO: Deleting all statefulset in ns e2e-tests-statefulset-86fw5
Dec 23 12:23:08.346: INFO: Scaling statefulset ss to 0
Dec 23 12:23:28.405: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 12:23:28.413: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:23:28.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-86fw5" for this suite.
Dec 23 12:23:36.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:23:36.777: INFO: namespace: e2e-tests-statefulset-86fw5, resource: bindings, ignored listing per whitelist
Dec 23 12:23:36.837: INFO: namespace e2e-tests-statefulset-86fw5 deletion completed in 8.206985686s

• [SLOW TEST:54.924 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
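The eviction scenario above hinges on a host-port conflict: a plain pod holds the port on the only schedulable node, so ss-0 fails, and the StatefulSet controller keeps recreating it until the conflicting pod is removed. The observable part, sketched with placeholder names:

kubectl get pods -n <namespace> -w &        # ss-0 cycles through Pending/Failed while the port is taken
kubectl delete pod test-pod -n <namespace>  # free the conflicting host port
kubectl get pod ss-0 -n <namespace>         # ss-0 is recreated and eventually reaches Running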
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:23:36.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0b65e574-257f-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:23:37.077: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-v7ctj" to be "success or failure"
Dec 23 12:23:37.088: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.173148ms
Dec 23 12:23:39.279: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200912384s
Dec 23 12:23:41.310: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231922268s
Dec 23 12:23:43.638: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560513661s
Dec 23 12:23:45.661: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582868291s
Dec 23 12:23:47.679: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601597804s
STEP: Saw pod success
Dec 23 12:23:47.679: INFO: Pod "pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:23:47.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 12:23:47.774: INFO: Waiting for pod pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:23:47.940: INFO: Pod pod-projected-secrets-0b682ef3-257f-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:23:47.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v7ctj" for this suite.
Dec 23 12:23:55.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:23:55.189: INFO: namespace: e2e-tests-projected-v7ctj, resource: bindings, ignored listing per whitelist
Dec 23 12:23:55.297: INFO: namespace e2e-tests-projected-v7ctj deletion completed in 7.348113897s

• [SLOW TEST:18.459 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
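The distinguishing feature of the projected-secret test above is the items mapping, which renames a secret key to an arbitrary path inside the volume. A minimal hypothetical example:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/new/path/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new/path/data-1
EOF
kubectl logs projected-secret-demo   # prints value-1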
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:23:55.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 23 12:24:02.819: INFO: 10 pods remaining
Dec 23 12:24:02.819: INFO: 10 pods have nil DeletionTimestamp
Dec 23 12:24:02.819: INFO: 
Dec 23 12:24:03.933: INFO: 5 pods remaining
Dec 23 12:24:03.934: INFO: 0 pods have nil DeletionTimestamp
Dec 23 12:24:03.934: INFO: 
Dec 23 12:24:04.614: INFO: 0 pods remaining
Dec 23 12:24:04.614: INFO: 0 pods have nil DeletionTimestamp
Dec 23 12:24:04.614: INFO: 
STEP: Gathering metrics
W1223 12:24:05.431711       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 12:24:05.431: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:24:05.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qqdrx" for this suite.
Dec 23 12:24:19.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:24:19.746: INFO: namespace: e2e-tests-gc-qqdrx, resource: bindings, ignored listing per whitelist
Dec 23 12:24:19.854: INFO: namespace e2e-tests-gc-qqdrx deletion completed in 14.410203527s

• [SLOW TEST:24.556 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
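"Keep the rc around until all its pods are deleted" is foreground cascading deletion: a DeleteOptions with propagationPolicy Foreground puts a foregroundDeletion finalizer on the ReplicationController, so the rc only disappears after the garbage collector has removed its pods. A sketch against the raw API via kubectl proxy, with the namespace and rc name as placeholders:

kubectl proxy --port=8080 &
curl -X DELETE "http://localhost:8080/api/v1/namespaces/<namespace>/replicationcontrollers/<rc-name>" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
kubectl get rc <rc-name> -n <namespace> -o yaml   # still present, carrying the foregroundDeletion finalizer, until its pods are gone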
SSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:24:19.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 23 12:24:46.313: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:46.314: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:46.847: INFO: Exec stderr: ""
Dec 23 12:24:46.847: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:46.847: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:47.223: INFO: Exec stderr: ""
Dec 23 12:24:47.223: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:47.223: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:47.634: INFO: Exec stderr: ""
Dec 23 12:24:47.634: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:47.634: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:47.988: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 23 12:24:47.988: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:47.988: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:48.391: INFO: Exec stderr: ""
Dec 23 12:24:48.392: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:48.392: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:48.830: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 23 12:24:48.830: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:48.830: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:49.160: INFO: Exec stderr: ""
Dec 23 12:24:49.160: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:49.161: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:49.490: INFO: Exec stderr: ""
Dec 23 12:24:49.491: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:49.491: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:49.870: INFO: Exec stderr: ""
Dec 23 12:24:49.870: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gk8l4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:24:49.871: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:24:50.133: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:24:50.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-gk8l4" for this suite.
Dec 23 12:25:46.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:25:46.315: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-gk8l4, resource: bindings, ignored listing per whitelist
Dec 23 12:25:46.339: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-gk8l4 deletion completed in 56.194925607s

• [SLOW TEST:86.484 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
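The /etc/hosts checks above distinguish three cases: containers whose /etc/hosts the kubelet manages, a container that mounts its own /etc/hosts, and a hostNetwork pod that sees the node's file. A quick manual check, assuming a kubelet of this vintage that prefixes the managed file with a "# Kubernetes-managed hosts file." comment:

kubectl exec -n <namespace> test-pod -c busybox-1 -- head -1 /etc/hosts               # kubelet-managed
kubectl exec -n <namespace> test-pod -c busybox-3 -- head -1 /etc/hosts               # container-provided mount, not managed
kubectl exec -n <namespace> test-host-network-pod -c busybox-1 -- head -1 /etc/hosts  # node's own file (hostNetwork=true)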
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:25:46.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:25:53.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-fdzq5" for this suite.
Dec 23 12:25:59.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:25:59.535: INFO: namespace: e2e-tests-namespaces-fdzq5, resource: bindings, ignored listing per whitelist
Dec 23 12:25:59.621: INFO: namespace e2e-tests-namespaces-fdzq5 deletion completed in 6.375003838s
STEP: Destroying namespace "e2e-tests-nsdeletetest-mcw84" for this suite.
Dec 23 12:25:59.625: INFO: Namespace e2e-tests-nsdeletetest-mcw84 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-kgrmq" for this suite.
Dec 23 12:26:05.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:26:05.835: INFO: namespace: e2e-tests-nsdeletetest-kgrmq, resource: bindings, ignored listing per whitelist
Dec 23 12:26:05.895: INFO: namespace e2e-tests-nsdeletetest-kgrmq deletion completed in 6.270383279s

• [SLOW TEST:19.556 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
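Namespace deletion is itself a cascading operation, which is what the test above verifies for Services. A self-contained sketch with made-up names:

kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo
kubectl get services -n nsdelete-demo   # fails with NotFound once the namespace is fully removed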
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:26:05.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 23 12:26:06.303: INFO: Number of nodes with available pods: 0
Dec 23 12:26:06.303: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:07.326: INFO: Number of nodes with available pods: 0
Dec 23 12:26:07.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:08.344: INFO: Number of nodes with available pods: 0
Dec 23 12:26:08.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:09.323: INFO: Number of nodes with available pods: 0
Dec 23 12:26:09.323: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:10.369: INFO: Number of nodes with available pods: 0
Dec 23 12:26:10.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:11.634: INFO: Number of nodes with available pods: 0
Dec 23 12:26:11.634: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:12.320: INFO: Number of nodes with available pods: 0
Dec 23 12:26:12.320: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:13.325: INFO: Number of nodes with available pods: 0
Dec 23 12:26:13.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:14.318: INFO: Number of nodes with available pods: 0
Dec 23 12:26:14.318: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:15.344: INFO: Number of nodes with available pods: 1
Dec 23 12:26:15.345: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 23 12:26:15.463: INFO: Number of nodes with available pods: 0
Dec 23 12:26:15.463: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:17.150: INFO: Number of nodes with available pods: 0
Dec 23 12:26:17.151: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:17.503: INFO: Number of nodes with available pods: 0
Dec 23 12:26:17.503: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:18.491: INFO: Number of nodes with available pods: 0
Dec 23 12:26:18.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:19.728: INFO: Number of nodes with available pods: 0
Dec 23 12:26:19.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:20.553: INFO: Number of nodes with available pods: 0
Dec 23 12:26:20.553: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:21.485: INFO: Number of nodes with available pods: 0
Dec 23 12:26:21.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:22.947: INFO: Number of nodes with available pods: 0
Dec 23 12:26:22.947: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:23.486: INFO: Number of nodes with available pods: 0
Dec 23 12:26:23.486: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:24.796: INFO: Number of nodes with available pods: 0
Dec 23 12:26:24.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:25.480: INFO: Number of nodes with available pods: 0
Dec 23 12:26:25.480: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:26:26.517: INFO: Number of nodes with available pods: 1
Dec 23 12:26:26.517: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-sn56w, will wait for the garbage collector to delete the pods
Dec 23 12:26:26.689: INFO: Deleting DaemonSet.extensions daemon-set took: 24.753083ms
Dec 23 12:26:26.790: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.740325ms
Dec 23 12:26:42.618: INFO: Number of nodes with available pods: 0
Dec 23 12:26:42.619: INFO: Number of running nodes: 0, number of available pods: 0
Dec 23 12:26:42.645: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-sn56w/daemonsets","resourceVersion":"15791767"},"items":null}

Dec 23 12:26:42.660: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-sn56w/pods","resourceVersion":"15791767"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:26:42.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-sn56w" for this suite.
Dec 23 12:26:48.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:26:48.898: INFO: namespace: e2e-tests-daemonsets-sn56w, resource: bindings, ignored listing per whitelist
Dec 23 12:26:48.901: INFO: namespace e2e-tests-daemonsets-sn56w deletion completed in 6.160329144s

• [SLOW TEST:43.005 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
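The DaemonSet this spec creates is deliberately minimal: one pod per node, with the controller expected to replace any pod whose phase is forced to Failed; the repeated "Number of nodes with available pods" lines above are its readiness poll. A rough client-go sketch of the create-and-wait half, with illustrative names and image, assuming a recent client-go release.

package e2esketch

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createAndWaitForDaemonSet creates a minimal DaemonSet and polls until every
// scheduled pod is available, mirroring the per-second poll in the log above.
func createAndWaitForDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // image is illustrative
					}},
				},
			},
		},
	}
	if _, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Poll roughly once per second, as the framework does.
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := c.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.DesiredNumberScheduled > 0 &&
			cur.Status.NumberAvailable == cur.Status.DesiredNumberScheduled, nil
	})
}
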
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:26:48.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7ddc4dc3-257f-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 12:26:49.105: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-75qdz" to be "success or failure"
Dec 23 12:26:49.127: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.867922ms
Dec 23 12:26:51.152: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046261617s
Dec 23 12:26:53.163: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057951112s
Dec 23 12:26:55.952: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847016574s
Dec 23 12:26:57.981: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875314454s
Dec 23 12:27:00.003: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.898019663s
Dec 23 12:27:02.068: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.962708352s
STEP: Saw pod success
Dec 23 12:27:02.069: INFO: Pod "pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:27:02.088: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 12:27:02.484: INFO: Waiting for pod pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:27:02.496: INFO: Pod pod-projected-configmaps-7ddd0ade-257f-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:27:02.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-75qdz" for this suite.
Dec 23 12:27:08.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:27:08.706: INFO: namespace: e2e-tests-projected-75qdz, resource: bindings, ignored listing per whitelist
Dec 23 12:27:08.786: INFO: namespace e2e-tests-projected-75qdz deletion completed in 6.238911311s

• [SLOW TEST:19.885 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
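The pod behind this spec mounts a ConfigMap through a projected volume, remaps the key to a new path ("with mappings"), and reads it as a non-root user. A sketch of that pod shape in Go structs; names, image, UID, and key/path are illustrative, not the framework's exact values.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod builds a pod that reads a re-pathed ConfigMap key
// from a projected volume as a non-root user.
func projectedConfigMapPod(ns, configMapName string) *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps", Namespace: ns},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
								// "mappings": re-path the key inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}
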
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:27:08.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 23 12:27:08.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:11.315: INFO: stderr: ""
Dec 23 12:27:11.315: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 12:27:11.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:11.524: INFO: stderr: ""
Dec 23 12:27:11.524: INFO: stdout: "update-demo-nautilus-94tx6 update-demo-nautilus-zs64q "
Dec 23 12:27:11.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94tx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:11.736: INFO: stderr: ""
Dec 23 12:27:11.736: INFO: stdout: ""
Dec 23 12:27:11.736: INFO: update-demo-nautilus-94tx6 is created but not running
Dec 23 12:27:16.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:17.022: INFO: stderr: ""
Dec 23 12:27:17.023: INFO: stdout: "update-demo-nautilus-94tx6 update-demo-nautilus-zs64q "
Dec 23 12:27:17.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94tx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:17.185: INFO: stderr: ""
Dec 23 12:27:17.185: INFO: stdout: ""
Dec 23 12:27:17.185: INFO: update-demo-nautilus-94tx6 is created but not running
Dec 23 12:27:22.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:22.588: INFO: stderr: ""
Dec 23 12:27:22.589: INFO: stdout: "update-demo-nautilus-94tx6 update-demo-nautilus-zs64q "
Dec 23 12:27:22.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94tx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:22.844: INFO: stderr: ""
Dec 23 12:27:22.845: INFO: stdout: ""
Dec 23 12:27:22.845: INFO: update-demo-nautilus-94tx6 is created but not running
Dec 23 12:27:27.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:28.078: INFO: stderr: ""
Dec 23 12:27:28.078: INFO: stdout: "update-demo-nautilus-94tx6 update-demo-nautilus-zs64q "
Dec 23 12:27:28.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94tx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:28.256: INFO: stderr: ""
Dec 23 12:27:28.256: INFO: stdout: "true"
Dec 23 12:27:28.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94tx6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:28.581: INFO: stderr: ""
Dec 23 12:27:28.581: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 12:27:28.581: INFO: validating pod update-demo-nautilus-94tx6
Dec 23 12:27:28.749: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 12:27:28.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 12:27:28.750: INFO: update-demo-nautilus-94tx6 is verified up and running
Dec 23 12:27:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zs64q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:28.887: INFO: stderr: ""
Dec 23 12:27:28.887: INFO: stdout: "true"
Dec 23 12:27:28.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zs64q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:27:28.994: INFO: stderr: ""
Dec 23 12:27:28.994: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 12:27:28.994: INFO: validating pod update-demo-nautilus-zs64q
Dec 23 12:27:29.006: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 12:27:29.006: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 12:27:29.006: INFO: update-demo-nautilus-zs64q is verified up and running
STEP: rolling-update to new replication controller
Dec 23 12:27:29.008: INFO: scanned /root for discovery docs: 
Dec 23 12:27:29.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:28:01.820: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 23 12:28:01.820: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 12:28:01.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:28:02.080: INFO: stderr: ""
Dec 23 12:28:02.081: INFO: stdout: "update-demo-kitten-cnf8l update-demo-kitten-dnxft "
Dec 23 12:28:02.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cnf8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:28:02.323: INFO: stderr: ""
Dec 23 12:28:02.323: INFO: stdout: "true"
Dec 23 12:28:02.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cnf8l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:28:02.569: INFO: stderr: ""
Dec 23 12:28:02.569: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 23 12:28:02.569: INFO: validating pod update-demo-kitten-cnf8l
Dec 23 12:28:02.687: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 23 12:28:02.687: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 23 12:28:02.687: INFO: update-demo-kitten-cnf8l is verified up and running
Dec 23 12:28:02.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dnxft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:28:02.806: INFO: stderr: ""
Dec 23 12:28:02.807: INFO: stdout: "true"
Dec 23 12:28:02.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dnxft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d65hv'
Dec 23 12:28:02.923: INFO: stderr: ""
Dec 23 12:28:02.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 23 12:28:02.924: INFO: validating pod update-demo-kitten-dnxft
Dec 23 12:28:02.942: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 23 12:28:02.942: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 23 12:28:02.942: INFO: update-demo-kitten-dnxft is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:28:02.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d65hv" for this suite.
Dec 23 12:28:26.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:28:27.023: INFO: namespace: e2e-tests-kubectl-d65hv, resource: bindings, ignored listing per whitelist
Dec 23 12:28:27.199: INFO: namespace e2e-tests-kubectl-d65hv deletion completed in 24.25100727s

• [SLOW TEST:78.412 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
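The repeated kubectl invocations above are a readiness poll: list pods by label, then check each container's running state through a go-template (the rolling-update command itself is deprecated, as the stderr line notes). A small standalone Go reproduction of that poll, shelling out to kubectl with the same templates; the namespace and binary path are taken from this run and would differ elsewhere.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ns := "e2e-tests-kubectl-d65hv"
	kubectl := "/usr/local/bin/kubectl"

	// List the update-demo pods by label, names only.
	out, err := exec.Command(kubectl, "--kubeconfig=/root/.kube/config",
		"get", "pods", "-o", "template",
		"--template={{range .items}}{{.metadata.name}} {{end}}",
		"-l", "name=update-demo", "--namespace="+ns).Output()
	if err != nil {
		panic(err)
	}
	// For each pod, ask whether its update-demo container reports a running state.
	for _, pod := range strings.Fields(string(out)) {
		state, err := exec.Command(kubectl, "--kubeconfig=/root/.kube/config",
			"get", "pods", pod, "-o", "template",
			`--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`,
			"--namespace="+ns).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s running: %v\n", pod, string(state) == "true")
	}
}
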
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:28:27.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 23 12:28:27.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 23 12:28:27.601: INFO: stderr: ""
Dec 23 12:28:27.601: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:28:27.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j59pz" for this suite.
Dec 23 12:28:33.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:28:33.803: INFO: namespace: e2e-tests-kubectl-j59pz, resource: bindings, ignored listing per whitelist
Dec 23 12:28:33.901: INFO: namespace e2e-tests-kubectl-j59pz deletion completed in 6.285479319s

• [SLOW TEST:6.702 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
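The check itself is a one-liner: run kubectl api-versions and confirm the core "v1" group/version appears in the output. A standalone Go equivalent; the kubeconfig path is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs, one group/version per output line.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "api-versions").Output()
	if err != nil {
		panic(err)
	}
	found := false
	for _, v := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if v == "v1" {
			found = true
		}
	}
	fmt.Println("v1 available:", found)
}
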
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:28:33.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-vpzt
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 12:28:34.250: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vpzt" in namespace "e2e-tests-subpath-rbbpv" to be "success or failure"
Dec 23 12:28:34.265: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.574246ms
Dec 23 12:28:36.283: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032661808s
Dec 23 12:28:38.300: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04994239s
Dec 23 12:28:40.991: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.74120525s
Dec 23 12:28:43.015: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.765327064s
Dec 23 12:28:45.025: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.774861806s
Dec 23 12:28:47.040: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.789962477s
Dec 23 12:28:49.076: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.82600375s
Dec 23 12:28:51.093: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 16.842855583s
Dec 23 12:28:53.105: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 18.85510686s
Dec 23 12:28:55.119: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 20.869243317s
Dec 23 12:28:57.137: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 22.887051231s
Dec 23 12:28:59.160: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 24.910499397s
Dec 23 12:29:01.180: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 26.930256914s
Dec 23 12:29:03.197: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 28.946872434s
Dec 23 12:29:05.223: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 30.973049129s
Dec 23 12:29:07.239: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 32.988629142s
Dec 23 12:29:09.255: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Running", Reason="", readiness=false. Elapsed: 35.004792836s
Dec 23 12:29:11.274: INFO: Pod "pod-subpath-test-configmap-vpzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.024472795s
STEP: Saw pod success
Dec 23 12:29:11.275: INFO: Pod "pod-subpath-test-configmap-vpzt" satisfied condition "success or failure"
Dec 23 12:29:11.281: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-vpzt container test-container-subpath-configmap-vpzt: 
STEP: delete the pod
Dec 23 12:29:12.112: INFO: Waiting for pod pod-subpath-test-configmap-vpzt to disappear
Dec 23 12:29:12.142: INFO: Pod pod-subpath-test-configmap-vpzt no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vpzt
Dec 23 12:29:12.143: INFO: Deleting pod "pod-subpath-test-configmap-vpzt" in namespace "e2e-tests-subpath-rbbpv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:29:12.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rbbpv" for this suite.
Dec 23 12:29:18.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:29:18.463: INFO: namespace: e2e-tests-subpath-rbbpv, resource: bindings, ignored listing per whitelist
Dec 23 12:29:18.502: INFO: namespace e2e-tests-subpath-rbbpv deletion completed in 6.336921036s

• [SLOW TEST:44.602 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
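The subpath pod mounts a single ConfigMap key over a path that already exists in the container image, which is what "mountPath of existing file" refers to. A simplified Go-struct sketch of such a pod; the real test also keeps the container running while the atomic-writer volume updates (hence the long Running phase above), which is omitted here, and all names, image, and paths are illustrative.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathPod mounts one ConfigMap key with SubPath over an existing file.
func subPathPod(ns, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap",
				Image:   "busybox",
				Command: []string{"cat", "/etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/etc/resolv.conf", // file that already exists in the image
					SubPath:   "resolv.conf",      // single key projected over it
				}},
			}},
		},
	}
}
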
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:29:18.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 23 12:29:37.305: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:37.327: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:39.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:39.346: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:41.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:41.367: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:43.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:43.354: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:45.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:45.338: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:47.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:47.349: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:49.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:49.383: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:51.328: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:51.455: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:53.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:53.450: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:55.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:55.343: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:57.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:57.343: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:29:59.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:29:59.348: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:30:01.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:30:01.345: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 12:30:03.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 12:30:03.373: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:30:03.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7bk7p" for this suite.
Dec 23 12:30:29.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:30:29.585: INFO: namespace: e2e-tests-container-lifecycle-hook-7bk7p, resource: bindings, ignored listing per whitelist
Dec 23 12:30:29.612: INFO: namespace e2e-tests-container-lifecycle-hook-7bk7p deletion completed in 26.20379494s

• [SLOW TEST:71.108 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
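The pod under test carries a PreStop exec hook, and the long "still exists" poll above is the framework waiting for graceful deletion so the hook has time to run; the real test verifies the hook by having it call back to the handler pod created in the BeforeEach. A Go-struct sketch of such a pod; the image and hook command are illustrative.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod builds a pod whose container runs an exec hook just before
// termination.
func preStopPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// LifecycleHandler is the type name in recent client-go;
					// older releases call it corev1.Handler.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop ran"},
						},
					},
				},
			}},
		},
	}
}
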
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:30:29.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:30:39.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5s59t" for this suite.
Dec 23 12:31:25.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:31:26.121: INFO: namespace: e2e-tests-kubelet-test-5s59t, resource: bindings, ignored listing per whitelist
Dec 23 12:31:26.153: INFO: namespace e2e-tests-kubelet-test-5s59t deletion completed in 46.22310386s

• [SLOW TEST:56.541 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
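There is little visible output here because the whole check happens inside the pod spec: the busybox container gets ReadOnlyRootFilesystem and its attempted write must fail. A Go-struct sketch of such a pod; names and the command are illustrative.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootPod builds a pod whose container root filesystem is read-only,
// so the write attempted by the command is expected to fail.
func readOnlyRootPod(ns string) *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-fs",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}
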
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:31:26.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:31:26.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:31:39.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7fzrb" for this suite.
Dec 23 12:32:25.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:32:25.415: INFO: namespace: e2e-tests-pods-7fzrb, resource: bindings, ignored listing per whitelist
Dec 23 12:32:25.481: INFO: namespace e2e-tests-pods-7fzrb deletion completed in 46.374828803s

• [SLOW TEST:59.328 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
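The exec-over-websockets test bypasses kubectl and dials the pod's exec subresource on the API server directly. Only the URL it dials is sketched below; authentication and the channel.k8s.io websocket subprotocol handling are omitted, and the host, namespace, and pod name are illustrative.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Query parameters name the command and which streams to attach.
	q := url.Values{}
	q.Add("command", "cat")
	q.Add("command", "/etc/resolv.conf")
	q.Set("stdout", "1")
	q.Set("stderr", "1")

	u := url.URL{
		Scheme:   "wss",
		Host:     "apiserver.example:6443",
		Path:     "/api/v1/namespaces/e2e-tests-pods-7fzrb/pods/pod-exec-websocket/exec",
		RawQuery: q.Encode(),
	}
	// The websocket client still has to authenticate (client certs or bearer
	// token) and negotiate the channel.k8s.io subprotocol; that is omitted here.
	fmt.Println(u.String())
}
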
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:32:25.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 12:32:25.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kcsv6'
Dec 23 12:32:25.903: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 12:32:25.903: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 23 12:32:25.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-kcsv6'
Dec 23 12:32:26.278: INFO: stderr: ""
Dec 23 12:32:26.278: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:32:26.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kcsv6" for this suite.
Dec 23 12:32:32.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:32:32.623: INFO: namespace: e2e-tests-kubectl-kcsv6, resource: bindings, ignored listing per whitelist
Dec 23 12:32:32.672: INFO: namespace e2e-tests-kubectl-kcsv6 deletion completed in 6.338254157s

• [SLOW TEST:7.190 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
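The --generator=job/v1 form used above no longer exists in current kubectl; the object it produced is an ordinary batch/v1 Job with RestartPolicy OnFailure, which today would come from kubectl create job or a manifest. A Go-struct sketch of that equivalent Job; the namespace parameter is illustrative.

package e2esketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxJob builds the Job equivalent of the deprecated
// `kubectl run --restart=OnFailure --generator=job/v1` invocation.
func nginxJob(ns string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job", Namespace: ns},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
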
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:32:32.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:32:32.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-klgj5" for this suite.
Dec 23 12:32:38.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:32:39.136: INFO: namespace: e2e-tests-services-klgj5, resource: bindings, ignored listing per whitelist
Dec 23 12:32:39.159: INFO: namespace e2e-tests-services-klgj5 deletion completed in 6.206135125s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.485 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
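The test body is nearly empty in the log because it is a single lookup: the "kubernetes" service in the default namespace must exist and expose the secure port 443. A rough client-go sketch of that check, assuming a recent client-go release.

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkKubernetesService verifies the master service exposes TCP/443.
func checkKubernetesService(ctx context.Context, c kubernetes.Interface) error {
	svc, err := c.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, p := range svc.Spec.Ports {
		if p.Port == 443 && p.Protocol == "TCP" {
			return nil
		}
	}
	return fmt.Errorf("service %q has no TCP port 443", svc.Name)
}
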
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:32:39.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4eaa2821-2580-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:32:39.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-6ch2q" to be "success or failure"
Dec 23 12:32:39.460: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.430384ms
Dec 23 12:32:42.285: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.845497117s
Dec 23 12:32:44.301: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.862015966s
Dec 23 12:32:46.319: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.879682211s
Dec 23 12:32:48.340: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.900808074s
Dec 23 12:32:50.368: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.929149157s
STEP: Saw pod success
Dec 23 12:32:50.369: INFO: Pod "pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:32:50.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 12:32:50.488: INFO: Waiting for pod pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:32:50.502: INFO: Pod pod-projected-secrets-4eab8066-2580-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:32:50.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6ch2q" for this suite.
Dec 23 12:32:56.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:32:56.771: INFO: namespace: e2e-tests-projected-6ch2q, resource: bindings, ignored listing per whitelist
Dec 23 12:32:56.774: INFO: namespace e2e-tests-projected-6ch2q deletion completed in 6.254870651s

• [SLOW TEST:17.614 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
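The first step here is creating the Secret that the projected volume later exposes; the pod side mirrors the projected-ConfigMap example earlier in this log. A short client-go sketch of that creation step, with an illustrative key and value and a recent client-go assumed.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTestSecret creates the Secret that a projected volume will expose.
func createTestSecret(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Data: map[string][]byte{
			"data-1": []byte("value-1"), // key and value are illustrative
		},
	}
	_, err := c.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
	return err
}
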
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:32:56.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5927f705-2580-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:32:57.050: INFO: Waiting up to 5m0s for pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-twsbp" to be "success or failure"
Dec 23 12:32:57.063: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.034986ms
Dec 23 12:32:59.184: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134148285s
Dec 23 12:33:01.715: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665572178s
Dec 23 12:33:03.756: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.706041016s
Dec 23 12:33:05.917: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.867450624s
Dec 23 12:33:07.972: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.922158565s
STEP: Saw pod success
Dec 23 12:33:07.972: INFO: Pod "pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:33:07.992: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 23 12:33:08.197: INFO: Waiting for pod pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:33:08.242: INFO: Pod pod-secrets-592b8519-2580-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:33:08.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-twsbp" for this suite.
Dec 23 12:33:14.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:33:14.432: INFO: namespace: e2e-tests-secrets-twsbp, resource: bindings, ignored listing per whitelist
Dec 23 12:33:14.534: INFO: namespace e2e-tests-secrets-twsbp deletion completed in 6.275571522s

• [SLOW TEST:17.760 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
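The only difference from a plain secret volume is the DefaultMode field, which fixes the permission bits on every projected file. A Go-struct sketch of such a volume; 0400 is an illustrative mode, not necessarily the one this run used.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// secretVolumeWithMode builds a Secret-backed volume whose files all get a
// fixed mode instead of the default 0644.
func secretVolumeWithMode(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}
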
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:33:14.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 23 12:33:14.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k67gx'
Dec 23 12:33:15.040: INFO: stderr: ""
Dec 23 12:33:15.040: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 23 12:33:16.052: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:16.053: INFO: Found 0 / 1
Dec 23 12:33:17.091: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:17.092: INFO: Found 0 / 1
Dec 23 12:33:18.050: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:18.050: INFO: Found 0 / 1
Dec 23 12:33:19.068: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:19.068: INFO: Found 0 / 1
Dec 23 12:33:20.123: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:20.123: INFO: Found 0 / 1
Dec 23 12:33:21.666: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:21.666: INFO: Found 0 / 1
Dec 23 12:33:22.109: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:22.110: INFO: Found 0 / 1
Dec 23 12:33:23.050: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:23.050: INFO: Found 0 / 1
Dec 23 12:33:24.074: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:24.075: INFO: Found 1 / 1
Dec 23 12:33:24.075: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 23 12:33:24.158: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 12:33:24.158: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Dec 23 12:33:24.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2c6x redis-master --namespace=e2e-tests-kubectl-k67gx'
Dec 23 12:33:24.410: INFO: stderr: ""
Dec 23 12:33:24.410: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Dec 12:33:22.912 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Dec 12:33:22.912 # Server started, Redis version 3.2.12\n1:M 23 Dec 12:33:22.913 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Dec 12:33:22.913 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 23 12:33:24.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p2c6x redis-master --namespace=e2e-tests-kubectl-k67gx --tail=1'
Dec 23 12:33:24.582: INFO: stderr: ""
Dec 23 12:33:24.582: INFO: stdout: "1:M 23 Dec 12:33:22.913 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 23 12:33:24.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p2c6x redis-master --namespace=e2e-tests-kubectl-k67gx --limit-bytes=1'
Dec 23 12:33:24.734: INFO: stderr: ""
Dec 23 12:33:24.734: INFO: stdout: " "
STEP: exposing timestamps
Dec 23 12:33:24.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p2c6x redis-master --namespace=e2e-tests-kubectl-k67gx --tail=1 --timestamps'
Dec 23 12:33:24.885: INFO: stderr: ""
Dec 23 12:33:24.886: INFO: stdout: "2019-12-23T12:33:22.914069213Z 1:M 23 Dec 12:33:22.913 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 23 12:33:27.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p2c6x redis-master --namespace=e2e-tests-kubectl-k67gx --since=1s'
Dec 23 12:33:27.642: INFO: stderr: ""
Dec 23 12:33:27.642: INFO: stdout: ""
Dec 23 12:33:27.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p2c6x redis-master --namespace=e2e-tests-kubectl-k67gx --since=24h'
Dec 23 12:33:27.810: INFO: stderr: ""
Dec 23 12:33:27.810: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Dec 12:33:22.912 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Dec 12:33:22.912 # Server started, Redis version 3.2.12\n1:M 23 Dec 12:33:22.913 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Dec 12:33:22.913 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 23 12:33:27.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k67gx'
Dec 23 12:33:28.063: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 12:33:28.063: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 23 12:33:28.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-k67gx'
Dec 23 12:33:28.237: INFO: stderr: "No resources found.\n"
Dec 23 12:33:28.238: INFO: stdout: ""
Dec 23 12:33:28.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-k67gx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 12:33:28.381: INFO: stderr: ""
Dec 23 12:33:28.382: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:33:28.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k67gx" for this suite.
Dec 23 12:33:52.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:33:52.961: INFO: namespace: e2e-tests-kubectl-k67gx, resource: bindings, ignored listing per whitelist
Dec 23 12:33:53.059: INFO: namespace e2e-tests-kubectl-k67gx deletion completed in 24.65298057s

• [SLOW TEST:38.524 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
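The log-filtering steps above exercise the standard kubectl log flags against the redis-master pod. A minimal sketch of equivalent invocations, assuming the pod, container, and namespace names from this run (kubectl logs is the current spelling of the deprecated kubectl log alias the test invokes):

    # return at most 1 byte of the container log
    kubectl logs redis-master-p2c6x -c redis-master -n e2e-tests-kubectl-k67gx --limit-bytes=1
    # last line only, with an RFC3339 timestamp prepended
    kubectl logs redis-master-p2c6x -c redis-master -n e2e-tests-kubectl-k67gx --tail=1 --timestamps
    # only entries newer than the given duration: 1s returns nothing here, 24h returns the full startup banner
    kubectl logs redis-master-p2c6x -c redis-master -n e2e-tests-kubectl-k67gx --since=1s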
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:33:53.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 12:33:53.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wx6p7'
Dec 23 12:33:53.434: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 12:33:53.434: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 23 12:33:53.528: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-ts6zd]
Dec 23 12:33:53.529: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-ts6zd" in namespace "e2e-tests-kubectl-wx6p7" to be "running and ready"
Dec 23 12:33:53.544: INFO: Pod "e2e-test-nginx-rc-ts6zd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.003406ms
Dec 23 12:33:55.834: INFO: Pod "e2e-test-nginx-rc-ts6zd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305509287s
Dec 23 12:33:57.860: INFO: Pod "e2e-test-nginx-rc-ts6zd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330920204s
Dec 23 12:33:59.914: INFO: Pod "e2e-test-nginx-rc-ts6zd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384935735s
Dec 23 12:34:02.011: INFO: Pod "e2e-test-nginx-rc-ts6zd": Phase="Running", Reason="", readiness=true. Elapsed: 8.48247828s
Dec 23 12:34:02.011: INFO: Pod "e2e-test-nginx-rc-ts6zd" satisfied condition "running and ready"
Dec 23 12:34:02.011: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-ts6zd]
Dec 23 12:34:02.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wx6p7'
Dec 23 12:34:02.233: INFO: stderr: ""
Dec 23 12:34:02.234: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 23 12:34:02.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wx6p7'
Dec 23 12:34:02.418: INFO: stderr: ""
Dec 23 12:34:02.419: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:34:02.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wx6p7" for this suite.
Dec 23 12:34:26.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:34:26.765: INFO: namespace: e2e-tests-kubectl-wx6p7, resource: bindings, ignored listing per whitelist
Dec 23 12:34:26.789: INFO: namespace e2e-tests-kubectl-wx6p7 deletion completed in 24.359707253s

• [SLOW TEST:33.731 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
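The rc test above relies on the old run generators: --generator=run/v1 produces a ReplicationController rather than a Deployment, and kubectl logs accepts the rc/ prefix to pick one of its pods. A minimal sketch of the same flow, using the names from this run (the generator flag is already reported as deprecated in the output and has been removed from newer kubectl releases):

    kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 -n e2e-tests-kubectl-wx6p7
    kubectl logs rc/e2e-test-nginx-rc -n e2e-tests-kubectl-wx6p7   # selects a pod owned by the rc and returns its logs
    kubectl delete rc e2e-test-nginx-rc -n e2e-tests-kubectl-wx6p7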
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:34:26.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1223 12:34:30.690967       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 12:34:30.691: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:34:30.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cvcdn" for this suite.
Dec 23 12:34:38.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:34:38.856: INFO: namespace: e2e-tests-gc-cvcdn, resource: bindings, ignored listing per whitelist
Dec 23 12:34:38.973: INFO: namespace e2e-tests-gc-cvcdn deletion completed in 8.272508359s

• [SLOW TEST:12.183 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
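The behaviour checked above is the default, non-orphaning cascading delete: removing the Deployment lets the garbage collector clean up the owned ReplicaSet and Pods through their ownerReferences. A minimal sketch with a hypothetical deployment name (--cascade=orphan is the newer spelling; older kubectl releases such as the one in this run used --cascade=false):

    kubectl delete deployment my-deployment                    # owned ReplicaSet and Pods are garbage collected
    kubectl delete deployment my-deployment --cascade=orphan   # ReplicaSet and Pods are left behind instead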
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:34:38.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 23 12:34:39.419: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pr9fs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pr9fs/configmaps/e2e-watch-test-watch-closed,UID:9626752b-2580-11ea-a994-fa163e34d433,ResourceVersion:15792831,Generation:0,CreationTimestamp:2019-12-23 12:34:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 12:34:39.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pr9fs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pr9fs/configmaps/e2e-watch-test-watch-closed,UID:9626752b-2580-11ea-a994-fa163e34d433,ResourceVersion:15792833,Generation:0,CreationTimestamp:2019-12-23 12:34:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 23 12:34:39.466: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pr9fs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pr9fs/configmaps/e2e-watch-test-watch-closed,UID:9626752b-2580-11ea-a994-fa163e34d433,ResourceVersion:15792834,Generation:0,CreationTimestamp:2019-12-23 12:34:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 12:34:39.466: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pr9fs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pr9fs/configmaps/e2e-watch-test-watch-closed,UID:9626752b-2580-11ea-a994-fa163e34d433,ResourceVersion:15792835,Generation:0,CreationTimestamp:2019-12-23 12:34:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:34:39.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pr9fs" for this suite.
Dec 23 12:34:45.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:34:45.911: INFO: namespace: e2e-tests-watch-pr9fs, resource: bindings, ignored listing per whitelist
Dec 23 12:34:45.974: INFO: namespace e2e-tests-watch-pr9fs deletion completed in 6.244674259s

• [SLOW TEST:7.001 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
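The restarted watch above is opened from the resourceVersion of the last event the closed watch delivered, so the MODIFIED (mutation: 2) and DELETED events are replayed. A sketch of the same idea against the raw API, assuming kubectl proxy is serving on 127.0.0.1:8001 and reusing the resourceVersion reported in this run:

    kubectl proxy --port=8001 &
    curl 'http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-watch-pr9fs/configmaps?watch=true&resourceVersion=15792833'
    # the stream begins with the changes that happened after resourceVersion 15792831/15792833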
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:34:45.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 23 12:34:46.299: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-vcwd5" to be "success or failure"
Dec 23 12:34:46.369: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 69.752582ms
Dec 23 12:34:48.680: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381042006s
Dec 23 12:34:50.713: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413806402s
Dec 23 12:34:52.885: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58577005s
Dec 23 12:34:54.928: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.628234108s
Dec 23 12:34:56.994: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.695100813s
Dec 23 12:34:59.014: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.715082667s
Dec 23 12:35:01.146: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.846449599s
STEP: Saw pod success
Dec 23 12:35:01.146: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 23 12:35:01.165: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 23 12:35:01.329: INFO: Waiting for pod pod-host-path-test to disappear
Dec 23 12:35:01.349: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:35:01.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-vcwd5" for this suite.
Dec 23 12:35:09.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:35:09.581: INFO: namespace: e2e-tests-hostpath-vcwd5, resource: bindings, ignored listing per whitelist
Dec 23 12:35:09.591: INFO: namespace e2e-tests-hostpath-vcwd5 deletion completed in 8.226918081s

• [SLOW TEST:23.616 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
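The pod-host-path-test pod above mounts a hostPath volume and verifies the mount's file mode from inside the container before completing. A minimal, hypothetical sketch of such a pod (the conformance test uses its own mounttest image and expected mode; image, path, and command here are illustrative only):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-host-path-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp/hostpath-demo
          type: DirectoryOrCreate
    EOF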
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:35:09.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cn7lp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 12:35:09.790: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 12:35:46.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-cn7lp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:35:46.119: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:35:46.710: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:35:46.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cn7lp" for this suite.
Dec 23 12:36:10.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:36:10.894: INFO: namespace: e2e-tests-pod-network-test-cn7lp, resource: bindings, ignored listing per whitelist
Dec 23 12:36:10.996: INFO: namespace e2e-tests-pod-network-test-cn7lp deletion completed in 24.263020635s

• [SLOW TEST:61.405 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
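The connectivity probe above is an exec into the host-network test pod that curls the test container pod's /dial endpoint at 10.32.0.5:8080, which in turn sends a UDP request to the target pod at 10.32.0.4:8081. A sketch of the same probe, using the pod names and IPs from this run:

    kubectl exec host-test-container-pod -n e2e-tests-pod-network-test-cn7lp -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
    # a JSON response containing the target pod's hostname indicates working intra-pod UDP connectivity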
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:36:10.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-cce63b4f-2580-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 12:36:11.255: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-tfgf4" to be "success or failure"
Dec 23 12:36:11.260: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.258196ms
Dec 23 12:36:13.282: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026845269s
Dec 23 12:36:15.300: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044790679s
Dec 23 12:36:17.313: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057416717s
Dec 23 12:36:19.329: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073944193s
Dec 23 12:36:21.341: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085353574s
STEP: Saw pod success
Dec 23 12:36:21.341: INFO: Pod "pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:36:21.347: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 12:36:21.568: INFO: Waiting for pod pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:36:21.586: INFO: Pod pod-projected-secrets-cce71aac-2580-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:36:21.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tfgf4" for this suite.
Dec 23 12:36:27.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:36:27.731: INFO: namespace: e2e-tests-projected-tfgf4, resource: bindings, ignored listing per whitelist
Dec 23 12:36:27.826: INFO: namespace e2e-tests-projected-tfgf4 deletion completed in 6.229006142s

• [SLOW TEST:16.830 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
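The volume consumed above is a projected secret mounted by a non-root pod with an explicit defaultMode plus an fsGroup in the pod security context. A minimal, hypothetical manifest fragment showing where those fields live (names, uid/gid, and mode are illustrative, not the values generated by the test):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
        fsGroup: 1001
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected-secret"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          defaultMode: 0440
          sources:
          - secret:
              name: projected-secret-demo
    EOF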
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:36:27.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:36:27.990: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 23 12:36:28.072: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 23 12:36:33.092: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 23 12:36:39.106: INFO: Creating deployment "test-rolling-update-deployment"
Dec 23 12:36:39.137: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 23 12:36:39.229: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 23 12:36:41.266: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 23 12:36:41.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 12:36:43.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 12:36:45.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 12:36:47.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712701399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 12:36:49.287: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 23 12:36:49.314: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-x7pp2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x7pp2/deployments/test-rolling-update-deployment,UID:dd8ab8f2-2580-11ea-a994-fa163e34d433,ResourceVersion:15793142,Generation:1,CreationTimestamp:2019-12-23 12:36:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-23 12:36:39 +0000 UTC 2019-12-23 12:36:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-23 12:36:48 +0000 UTC 2019-12-23 12:36:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 23 12:36:49.319: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-x7pp2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x7pp2/replicasets/test-rolling-update-deployment-75db98fb4c,UID:dda42adf-2580-11ea-a994-fa163e34d433,ResourceVersion:15793132,Generation:1,CreationTimestamp:2019-12-23 12:36:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment dd8ab8f2-2580-11ea-a994-fa163e34d433 0xc0023017b7 0xc0023017b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 23 12:36:49.320: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 23 12:36:49.320: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-x7pp2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x7pp2/replicasets/test-rolling-update-controller,UID:d6ea8279-2580-11ea-a994-fa163e34d433,ResourceVersion:15793141,Generation:2,CreationTimestamp:2019-12-23 12:36:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment dd8ab8f2-2580-11ea-a994-fa163e34d433 0xc0023016f7 0xc0023016f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 12:36:49.327: INFO: Pod "test-rolling-update-deployment-75db98fb4c-k4sdj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-k4sdj,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-x7pp2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x7pp2/pods/test-rolling-update-deployment-75db98fb4c-k4sdj,UID:ddb23fc3-2580-11ea-a994-fa163e34d433,ResourceVersion:15793131,Generation:0,CreationTimestamp:2019-12-23 12:36:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c dda42adf-2580-11ea-a994-fa163e34d433 0xc001a72157 0xc001a72158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vrqrs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vrqrs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-vrqrs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a721c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a721e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:36:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:36:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:36:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 12:36:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-23 12:36:39 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-23 12:36:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2e18b1b9f4400f80a9c16e5bc9304e87e07f70df6d0eaa20221d3ed2ac31cb5e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:36:49.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-x7pp2" for this suite.
Dec 23 12:36:57.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:36:57.851: INFO: namespace: e2e-tests-deployment-x7pp2, resource: bindings, ignored listing per whitelist
Dec 23 12:36:57.893: INFO: namespace e2e-tests-deployment-x7pp2 deletion completed in 8.556187334s

• [SLOW TEST:30.066 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
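The rolling update above lets the new Deployment adopt the existing replica set, then replaces its pods with pods from a new ReplicaSet and scales the adopted one to zero. A few commands for observing the same state on a live cluster, assuming the names from this run:

    kubectl rollout status deployment/test-rolling-update-deployment -n e2e-tests-deployment-x7pp2
    kubectl get rs -n e2e-tests-deployment-x7pp2 -o wide   # old RS at 0 replicas, test-rolling-update-deployment-75db98fb4c at 1/1
    kubectl rollout history deployment/test-rolling-update-deployment -n e2e-tests-deployment-x7pp2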
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:36:57.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e95155be-2580-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 12:36:58.893: INFO: Waiting up to 5m0s for pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-sq5vz" to be "success or failure"
Dec 23 12:36:58.933: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.923977ms
Dec 23 12:37:01.039: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146416891s
Dec 23 12:37:03.051: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158343727s
Dec 23 12:37:05.085: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192070805s
Dec 23 12:37:07.100: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207201567s
Dec 23 12:37:09.114: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.220967972s
Dec 23 12:37:11.379: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.486285484s
STEP: Saw pod success
Dec 23 12:37:11.380: INFO: Pod "pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:37:11.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 23 12:37:11.651: INFO: Waiting for pod pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:37:11.657: INFO: Pod pod-configmaps-e95308d8-2580-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:37:11.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sq5vz" for this suite.
Dec 23 12:37:17.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:37:17.827: INFO: namespace: e2e-tests-configmap-sq5vz, resource: bindings, ignored listing per whitelist
Dec 23 12:37:17.955: INFO: namespace e2e-tests-configmap-sq5vz deletion completed in 6.248117396s

• [SLOW TEST:20.062 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
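The ConfigMap volume above combines per-item mappings (items) with an explicit per-file mode, so a single key is projected to a chosen path with chosen permissions. A minimal, hypothetical fragment showing the shape of such a volume (key, path, and mode are illustrative):

    kubectl create configmap configmap-test-volume-map-demo --from-literal=data-2=value-2
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map-demo
          items:
          - key: data-2
            path: path/to/data-2
            mode: 0400
    EOF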
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:37:17.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 23 12:37:18.108: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:37:18.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9jnld" for this suite.
Dec 23 12:37:24.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:37:24.418: INFO: namespace: e2e-tests-kubectl-9jnld, resource: bindings, ignored listing per whitelist
Dec 23 12:37:24.505: INFO: namespace e2e-tests-kubectl-9jnld deletion completed in 6.254505888s

• [SLOW TEST:6.549 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
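Passing --port 0 above asks kubectl proxy to bind an ephemeral port; the test then curls /api/ on whatever port the proxy reports. A sketch of the same check (the printed port number is only an example):

    kubectl proxy --port=0 --disable-filter=true &
    # kubectl prints something like "Starting to serve on 127.0.0.1:41227"; curl that port:
    curl http://127.0.0.1:41227/api/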
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:37:24.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-67rg2
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 12:37:24.687: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 12:37:57.121: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-67rg2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 12:37:57.121: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:37:57.717: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:37:57.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-67rg2" for this suite.
Dec 23 12:38:21.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:38:21.926: INFO: namespace: e2e-tests-pod-network-test-67rg2, resource: bindings, ignored listing per whitelist
Dec 23 12:38:21.999: INFO: namespace e2e-tests-pod-network-test-67rg2 deletion completed in 24.262674704s

• [SLOW TEST:57.494 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
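The HTTP variant above mirrors the earlier UDP check: exec into the host-network pod and curl the /dial endpoint, this time with protocol=http against port 8080 of the target pod. A sketch using the pod names and IPs from this run:

    kubectl exec host-test-container-pod -n e2e-tests-pod-network-test-67rg2 -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"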
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:38:22.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 23 12:38:22.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:24.776: INFO: stderr: ""
Dec 23 12:38:24.776: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 12:38:24.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:24.999: INFO: stderr: ""
Dec 23 12:38:24.999: INFO: stdout: "update-demo-nautilus-knz66 update-demo-nautilus-rk9v7 "
Dec 23 12:38:25.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knz66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:25.182: INFO: stderr: ""
Dec 23 12:38:25.182: INFO: stdout: ""
Dec 23 12:38:25.182: INFO: update-demo-nautilus-knz66 is created but not running
Dec 23 12:38:30.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:30.391: INFO: stderr: ""
Dec 23 12:38:30.392: INFO: stdout: "update-demo-nautilus-knz66 update-demo-nautilus-rk9v7 "
Dec 23 12:38:30.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knz66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:30.552: INFO: stderr: ""
Dec 23 12:38:30.553: INFO: stdout: ""
Dec 23 12:38:30.553: INFO: update-demo-nautilus-knz66 is created but not running
Dec 23 12:38:35.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:35.768: INFO: stderr: ""
Dec 23 12:38:35.769: INFO: stdout: "update-demo-nautilus-knz66 update-demo-nautilus-rk9v7 "
Dec 23 12:38:35.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knz66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:35.973: INFO: stderr: ""
Dec 23 12:38:35.974: INFO: stdout: ""
Dec 23 12:38:35.974: INFO: update-demo-nautilus-knz66 is created but not running
Dec 23 12:38:40.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:41.181: INFO: stderr: ""
Dec 23 12:38:41.181: INFO: stdout: "update-demo-nautilus-knz66 update-demo-nautilus-rk9v7 "
Dec 23 12:38:41.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knz66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:41.384: INFO: stderr: ""
Dec 23 12:38:41.384: INFO: stdout: "true"
Dec 23 12:38:41.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knz66 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:41.503: INFO: stderr: ""
Dec 23 12:38:41.503: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 12:38:41.503: INFO: validating pod update-demo-nautilus-knz66
Dec 23 12:38:41.569: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 12:38:41.569: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 12:38:41.569: INFO: update-demo-nautilus-knz66 is verified up and running
Dec 23 12:38:41.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rk9v7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:41.689: INFO: stderr: ""
Dec 23 12:38:41.689: INFO: stdout: "true"
Dec 23 12:38:41.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rk9v7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:41.813: INFO: stderr: ""
Dec 23 12:38:41.813: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 12:38:41.813: INFO: validating pod update-demo-nautilus-rk9v7
Dec 23 12:38:41.828: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 12:38:41.828: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 12:38:41.828: INFO: update-demo-nautilus-rk9v7 is verified up and running
STEP: using delete to clean up resources
Dec 23 12:38:41.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:41.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 12:38:41.989: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 23 12:38:41.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-k995q'
Dec 23 12:38:42.453: INFO: stderr: "No resources found.\n"
Dec 23 12:38:42.453: INFO: stdout: ""
Dec 23 12:38:42.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-k995q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 12:38:42.659: INFO: stderr: ""
Dec 23 12:38:42.659: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:38:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k995q" for this suite.
Dec 23 12:39:06.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:39:06.846: INFO: namespace: e2e-tests-kubectl-k995q, resource: bindings, ignored listing per whitelist
Dec 23 12:39:06.885: INFO: namespace e2e-tests-kubectl-k995q deletion completed in 24.205558614s

• [SLOW TEST:44.885 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
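Note: the polling loop above can be reproduced by hand; a minimal sketch using the namespace, label, and pod name recorded in this run (they will differ in any other run):

$ kubectl get pods -l name=update-demo -n e2e-tests-kubectl-k995q -o go-template='{{range .items}}{{.metadata.name}} {{end}}'
$ kubectl get pod update-demo-nautilus-knz66 -n e2e-tests-kubectl-k995q -o go-template='{{range .status.containerStatuses}}{{.name}}={{.ready}} {{end}}'
# prints each container's ready flag; the test itself waits for state.running on the update-demo container and then verifies the image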
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:39:06.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 12:39:07.144: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-txn7w" to be "success or failure"
Dec 23 12:39:07.164: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.672337ms
Dec 23 12:39:09.755: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.610633486s
Dec 23 12:39:11.780: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.636213057s
Dec 23 12:39:14.156: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.012542412s
Dec 23 12:39:16.175: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.03109489s
Dec 23 12:39:18.187: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.043123071s
STEP: Saw pod success
Dec 23 12:39:18.187: INFO: Pod "downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:39:18.194: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 12:39:18.275: INFO: Waiting for pod downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:39:18.281: INFO: Pod downwardapi-volume-35bfc36e-2581-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:39:18.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-txn7w" for this suite.
Dec 23 12:39:24.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:39:24.804: INFO: namespace: e2e-tests-downward-api-txn7w, resource: bindings, ignored listing per whitelist
Dec 23 12:39:24.820: INFO: namespace e2e-tests-downward-api-txn7w deletion completed in 6.532817603s

• [SLOW TEST:17.934 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
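Note: the assertion behind this spec is that a container with no explicit memory limit sees the node's allocatable memory through the downward API volume. A rough cross-check, as a sketch against the node used in this run:

$ kubectl get node hunter-server-hu5at5svl7ps -o go-template='{{.status.allocatable.memory}}'
# should match the default memory limit the pod observed via the downward API volume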
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:39:24.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 23 12:39:26.363: INFO: Pod name wrapped-volume-race-4121d0e5-2581-11ea-a9d2-0242ac110005: Found 0 pods out of 5
Dec 23 12:39:31.389: INFO: Pod name wrapped-volume-race-4121d0e5-2581-11ea-a9d2-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4121d0e5-2581-11ea-a9d2-0242ac110005 in namespace e2e-tests-emptydir-wrapper-w8n6v, will wait for the garbage collector to delete the pods
Dec 23 12:41:37.521: INFO: Deleting ReplicationController wrapped-volume-race-4121d0e5-2581-11ea-a9d2-0242ac110005 took: 15.202986ms
Dec 23 12:41:37.921: INFO: Terminating ReplicationController wrapped-volume-race-4121d0e5-2581-11ea-a9d2-0242ac110005 pods took: 400.538814ms
STEP: Creating RC which spawns configmap-volume pods
Dec 23 12:42:32.991: INFO: Pod name wrapped-volume-race-b05eac50-2581-11ea-a9d2-0242ac110005: Found 0 pods out of 5
Dec 23 12:42:38.034: INFO: Pod name wrapped-volume-race-b05eac50-2581-11ea-a9d2-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b05eac50-2581-11ea-a9d2-0242ac110005 in namespace e2e-tests-emptydir-wrapper-w8n6v, will wait for the garbage collector to delete the pods
Dec 23 12:44:52.239: INFO: Deleting ReplicationController wrapped-volume-race-b05eac50-2581-11ea-a9d2-0242ac110005 took: 27.232791ms
Dec 23 12:44:52.640: INFO: Terminating ReplicationController wrapped-volume-race-b05eac50-2581-11ea-a9d2-0242ac110005 pods took: 400.977434ms
STEP: Creating RC which spawns configmap-volume pods
Dec 23 12:45:42.838: INFO: Pod name wrapped-volume-race-2192070a-2582-11ea-a9d2-0242ac110005: Found 0 pods out of 5
Dec 23 12:45:47.890: INFO: Pod name wrapped-volume-race-2192070a-2582-11ea-a9d2-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2192070a-2582-11ea-a9d2-0242ac110005 in namespace e2e-tests-emptydir-wrapper-w8n6v, will wait for the garbage collector to delete the pods
Dec 23 12:48:02.059: INFO: Deleting ReplicationController wrapped-volume-race-2192070a-2582-11ea-a9d2-0242ac110005 took: 28.674167ms
Dec 23 12:48:02.359: INFO: Terminating ReplicationController wrapped-volume-race-2192070a-2582-11ea-a9d2-0242ac110005 pods took: 300.75666ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:48:55.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-w8n6v" for this suite.
Dec 23 12:49:03.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:49:03.549: INFO: namespace: e2e-tests-emptydir-wrapper-w8n6v, resource: bindings, ignored listing per whitelist
Dec 23 12:49:03.584: INFO: namespace e2e-tests-emptydir-wrapper-w8n6v deletion completed in 8.269914942s

• [SLOW TEST:578.763 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
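Note: the race this spec exercises comes from mounting many ConfigMap-backed volumes in pods created at the same time. A hand-rolled sketch of the setup step (the configmap name prefix is made up here; the namespace is the one this run used before deleting it):

$ for i in $(seq 1 50); do kubectl create configmap race-cm-$i --from-literal=data=1 -n e2e-tests-emptydir-wrapper-w8n6v; done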
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:49:03.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 12:49:03.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005" in namespace "e2e-tests-projected-mhgrc" to be "success or failure"
Dec 23 12:49:03.998: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.060229ms
Dec 23 12:49:06.895: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.989922948s
Dec 23 12:49:08.947: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.042226974s
Dec 23 12:49:10.966: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.060846826s
Dec 23 12:49:13.462: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.556525355s
Dec 23 12:49:15.490: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.585239326s
Dec 23 12:49:17.506: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.600807509s
STEP: Saw pod success
Dec 23 12:49:17.506: INFO: Pod "downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:49:17.512: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 12:49:18.319: INFO: Waiting for pod downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:49:18.346: INFO: Pod downwardapi-volume-997293f9-2582-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:49:18.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mhgrc" for this suite.
Dec 23 12:49:26.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:49:26.950: INFO: namespace: e2e-tests-projected-mhgrc, resource: bindings, ignored listing per whitelist
Dec 23 12:49:26.958: INFO: namespace e2e-tests-projected-mhgrc deletion completed in 8.602178764s

• [SLOW TEST:23.374 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:49:26.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 23 12:49:27.218: INFO: Waiting up to 5m0s for pod "pod-a7596267-2582-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-6hlr4" to be "success or failure"
Dec 23 12:49:27.283: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.101571ms
Dec 23 12:49:29.459: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240594924s
Dec 23 12:49:31.473: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254251817s
Dec 23 12:49:33.718: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499391811s
Dec 23 12:49:35.749: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530127042s
Dec 23 12:49:37.765: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.546037933s
STEP: Saw pod success
Dec 23 12:49:37.765: INFO: Pod "pod-a7596267-2582-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:49:37.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a7596267-2582-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 12:49:37.885: INFO: Waiting for pod pod-a7596267-2582-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:49:37.899: INFO: Pod pod-a7596267-2582-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:49:37.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6hlr4" for this suite.
Dec 23 12:49:43.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:49:44.081: INFO: namespace: e2e-tests-emptydir-6hlr4, resource: bindings, ignored listing per whitelist
Dec 23 12:49:44.142: INFO: namespace e2e-tests-emptydir-6hlr4 deletion completed in 6.233440081s

• [SLOW TEST:17.184 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:49:44.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 23 12:49:44.518: INFO: Waiting up to 5m0s for pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005" in namespace "e2e-tests-var-expansion-wsdn6" to be "success or failure"
Dec 23 12:49:44.594: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.697075ms
Dec 23 12:49:46.612: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093843559s
Dec 23 12:49:48.644: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126513119s
Dec 23 12:49:50.740: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221596613s
Dec 23 12:49:53.574: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.056481665s
Dec 23 12:49:56.451: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.932555317s
STEP: Saw pod success
Dec 23 12:49:56.451: INFO: Pod "var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:49:56.484: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 23 12:49:57.115: INFO: Waiting for pod var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:49:57.125: INFO: Pod var-expansion-b1a5c717-2582-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:49:57.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wsdn6" for this suite.
Dec 23 12:50:03.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:50:03.318: INFO: namespace: e2e-tests-var-expansion-wsdn6, resource: bindings, ignored listing per whitelist
Dec 23 12:50:03.349: INFO: namespace e2e-tests-var-expansion-wsdn6 deletion completed in 6.206386863s

• [SLOW TEST:19.207 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:50:03.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-zgpfp
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 23 12:50:03.778: INFO: Found 0 stateful pods, waiting for 3
Dec 23 12:50:14.409: INFO: Found 2 stateful pods, waiting for 3
Dec 23 12:50:23.803: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:50:23.804: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:50:23.804: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 12:50:33.815: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:50:33.816: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:50:33.816: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 23 12:50:33.935: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 23 12:50:44.052: INFO: Updating stateful set ss2
Dec 23 12:50:44.105: INFO: Waiting for Pod e2e-tests-statefulset-zgpfp/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 23 12:50:56.159: INFO: Found 2 stateful pods, waiting for 3
Dec 23 12:51:06.255: INFO: Found 2 stateful pods, waiting for 3
Dec 23 12:51:16.328: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:51:16.328: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:51:16.328: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 12:51:26.184: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:51:26.184: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 12:51:26.184: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 23 12:51:26.243: INFO: Updating stateful set ss2
Dec 23 12:51:26.354: INFO: Waiting for Pod e2e-tests-statefulset-zgpfp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 23 12:51:36.476: INFO: Updating stateful set ss2
Dec 23 12:51:36.649: INFO: Waiting for StatefulSet e2e-tests-statefulset-zgpfp/ss2 to complete update
Dec 23 12:51:36.649: INFO: Waiting for Pod e2e-tests-statefulset-zgpfp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 23 12:51:46.691: INFO: Waiting for StatefulSet e2e-tests-statefulset-zgpfp/ss2 to complete update
Dec 23 12:51:46.691: INFO: Waiting for Pod e2e-tests-statefulset-zgpfp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 23 12:51:56.680: INFO: Waiting for StatefulSet e2e-tests-statefulset-zgpfp/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 23 12:52:06.669: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zgpfp
Dec 23 12:52:06.675: INFO: Scaling statefulset ss2 to 0
Dec 23 12:52:36.747: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 12:52:36.761: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:52:36.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-zgpfp" for this suite.
Dec 23 12:52:44.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:52:45.135: INFO: namespace: e2e-tests-statefulset-zgpfp, resource: bindings, ignored listing per whitelist
Dec 23 12:52:45.151: INFO: namespace e2e-tests-statefulset-zgpfp deletion completed in 8.342229394s

• [SLOW TEST:161.801 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
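Note: the canary and phased steps above map onto the StatefulSet rolling-update partition field. A sketch of the same sequence with plain kubectl (the container name nginx is an assumption, the log does not show it; namespace and image are from this run):

$ kubectl -n e2e-tests-statefulset-zgpfp patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
$ kubectl -n e2e-tests-statefulset-zgpfp set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine   # only ordinals >= 2 are updated (the canary)
$ kubectl -n e2e-tests-statefulset-zgpfp patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # phased roll-out of the remaining pods
$ kubectl -n e2e-tests-statefulset-zgpfp rollout status statefulset/ss2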
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:52:45.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1223 12:53:25.935166       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 12:53:25.935: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:53:25.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-crjm5" for this suite.
Dec 23 12:53:42.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:53:43.985: INFO: namespace: e2e-tests-gc-crjm5, resource: bindings, ignored listing per whitelist
Dec 23 12:53:44.612: INFO: namespace e2e-tests-gc-crjm5 deletion completed in 18.649819962s

• [SLOW TEST:59.461 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
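Note: orphaning is the non-cascading delete path: the replication controller is removed, the garbage collector strips the owner references, and the pods keep running. A sketch with a hypothetical RC name, using the v1.13-era --cascade=false form that matches this run:

$ kubectl -n e2e-tests-gc-crjm5 delete rc my-rc --cascade=false
$ kubectl -n e2e-tests-gc-crjm5 get pods   # pods created by my-rc are still listed after the delete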
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:53:44.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 23 12:53:45.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-rhl7s" to be "success or failure"
Dec 23 12:53:45.411: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.873224ms
Dec 23 12:53:47.493: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134356441s
Dec 23 12:53:49.625: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266457411s
Dec 23 12:53:52.181: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.822353593s
Dec 23 12:53:54.199: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.840752518s
Dec 23 12:53:56.479: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.120547009s
Dec 23 12:53:59.145: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.787289912s
Dec 23 12:54:01.171: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.812801533s
STEP: Saw pod success
Dec 23 12:54:01.171: INFO: Pod "downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:54:01.182: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005 container client-container: 
STEP: delete the pod
Dec 23 12:54:01.622: INFO: Waiting for pod downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:54:01.639: INFO: Pod downwardapi-volume-41383438-2583-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:54:01.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rhl7s" for this suite.
Dec 23 12:54:07.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:54:07.797: INFO: namespace: e2e-tests-downward-api-rhl7s, resource: bindings, ignored listing per whitelist
Dec 23 12:54:07.900: INFO: namespace e2e-tests-downward-api-rhl7s deletion completed in 6.248985644s

• [SLOW TEST:23.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:54:07.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:55:31.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-nw42t" for this suite.
Dec 23 12:55:39.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:55:39.502: INFO: namespace: e2e-tests-container-runtime-nw42t, resource: bindings, ignored listing per whitelist
Dec 23 12:55:39.867: INFO: namespace e2e-tests-container-runtime-nw42t deletion completed in 8.451252732s

• [SLOW TEST:91.967 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
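Note: each expectation above (RestartCount, Phase, Ready condition, State) is visible on the pod status. A quick way to inspect them by hand (sketch; pod and namespace are placeholders, not values from this run):

$ kubectl -n <namespace> get pod <pod> -o go-template='{{.status.phase}} {{range .status.containerStatuses}}{{.name}} restarts={{.restartCount}} ready={{.ready}} {{end}}'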
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:55:39.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-frnw
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 12:55:40.219: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-frnw" in namespace "e2e-tests-subpath-gjs6k" to be "success or failure"
Dec 23 12:55:40.387: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 167.65122ms
Dec 23 12:55:42.714: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494777368s
Dec 23 12:55:45.029: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.809722326s
Dec 23 12:55:49.141: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921886564s
Dec 23 12:55:51.169: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.949466377s
Dec 23 12:55:53.188: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.969114075s
Dec 23 12:55:55.213: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.993363427s
Dec 23 12:55:57.281: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.06194619s
Dec 23 12:55:59.312: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.092856234s
Dec 23 12:56:01.344: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 21.124826109s
Dec 23 12:56:03.377: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 23.157769815s
Dec 23 12:56:05.423: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 25.20359879s
Dec 23 12:56:07.467: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 27.247463615s
Dec 23 12:56:09.501: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 29.281664781s
Dec 23 12:56:11.512: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 31.292596243s
Dec 23 12:56:13.532: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 33.31286519s
Dec 23 12:56:15.580: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 35.360385946s
Dec 23 12:56:17.609: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Running", Reason="", readiness=false. Elapsed: 37.390139861s
Dec 23 12:56:19.624: INFO: Pod "pod-subpath-test-downwardapi-frnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.40517266s
STEP: Saw pod success
Dec 23 12:56:19.625: INFO: Pod "pod-subpath-test-downwardapi-frnw" satisfied condition "success or failure"
Dec 23 12:56:19.637: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-frnw container test-container-subpath-downwardapi-frnw: 
STEP: delete the pod
Dec 23 12:56:20.446: INFO: Waiting for pod pod-subpath-test-downwardapi-frnw to disappear
Dec 23 12:56:20.973: INFO: Pod pod-subpath-test-downwardapi-frnw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-frnw
Dec 23 12:56:20.974: INFO: Deleting pod "pod-subpath-test-downwardapi-frnw" in namespace "e2e-tests-subpath-gjs6k"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:56:20.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gjs6k" for this suite.
Dec 23 12:56:29.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:56:29.145: INFO: namespace: e2e-tests-subpath-gjs6k, resource: bindings, ignored listing per whitelist
Dec 23 12:56:29.297: INFO: namespace e2e-tests-subpath-gjs6k deletion completed in 8.286469662s

• [SLOW TEST:49.430 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
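Note: the subPath field exercised here mounts a single path of the downward API volume inside the container. Its API documentation can be pulled straight from the cluster:

$ kubectl explain pod.spec.containers.volumeMounts.subPath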
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:56:29.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 23 12:56:29.560: INFO: Waiting up to 5m0s for pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005" in namespace "e2e-tests-containers-8vmkc" to be "success or failure"
Dec 23 12:56:29.571: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.972403ms
Dec 23 12:56:32.031: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470551461s
Dec 23 12:56:34.045: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484921473s
Dec 23 12:56:36.641: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.080601846s
Dec 23 12:56:38.730: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.169904815s
Dec 23 12:56:40.744: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.18327399s
Dec 23 12:56:42.760: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.199696179s
Dec 23 12:56:44.779: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.218609954s
STEP: Saw pod success
Dec 23 12:56:44.779: INFO: Pod "client-containers-a30ea988-2583-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 12:56:44.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a30ea988-2583-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 12:56:45.133: INFO: Waiting for pod client-containers-a30ea988-2583-11ea-a9d2-0242ac110005 to disappear
Dec 23 12:56:45.390: INFO: Pod client-containers-a30ea988-2583-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:56:45.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8vmkc" for this suite.
Dec 23 12:56:53.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:56:53.563: INFO: namespace: e2e-tests-containers-8vmkc, resource: bindings, ignored listing per whitelist
Dec 23 12:56:54.050: INFO: namespace e2e-tests-containers-8vmkc deletion completed in 8.639701963s

• [SLOW TEST:24.752 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
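Note: overriding the image's default command (entrypoint) corresponds to setting the container command in the pod spec. A rough stand-alone sketch with kubectl run (image and command here are illustrative, not the ones the test used):

$ kubectl run entrypoint-override --image=busybox --restart=Never --command -- /bin/sh -c 'echo overridden entrypoint'
$ kubectl logs entrypoint-override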
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:56:54.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:56:54.319: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:56:55.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-c4hd8" for this suite.
Dec 23 12:57:01.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:57:01.759: INFO: namespace: e2e-tests-custom-resource-definition-c4hd8, resource: bindings, ignored listing per whitelist
Dec 23 12:57:02.297: INFO: namespace e2e-tests-custom-resource-definition-c4hd8 deletion completed in 6.63742597s

• [SLOW TEST:8.246 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
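Note: the create/delete round trip can also be checked from the client side; a sketch with a hypothetical CRD name:

$ kubectl get crd
$ kubectl delete crd foos.example.com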
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:57:02.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 12:57:02.679: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 23 12:57:02.698: INFO: Number of nodes with available pods: 0
Dec 23 12:57:02.698: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 23 12:57:02.759: INFO: Number of nodes with available pods: 0
Dec 23 12:57:02.760: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:03.777: INFO: Number of nodes with available pods: 0
Dec 23 12:57:03.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:04.791: INFO: Number of nodes with available pods: 0
Dec 23 12:57:04.791: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:05.787: INFO: Number of nodes with available pods: 0
Dec 23 12:57:05.787: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:06.765: INFO: Number of nodes with available pods: 0
Dec 23 12:57:06.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:08.304: INFO: Number of nodes with available pods: 0
Dec 23 12:57:08.304: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:08.901: INFO: Number of nodes with available pods: 0
Dec 23 12:57:08.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:09.855: INFO: Number of nodes with available pods: 0
Dec 23 12:57:09.856: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:10.830: INFO: Number of nodes with available pods: 1
Dec 23 12:57:10.830: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 23 12:57:10.931: INFO: Number of nodes with available pods: 1
Dec 23 12:57:10.932: INFO: Number of running nodes: 0, number of available pods: 1
Dec 23 12:57:11.952: INFO: Number of nodes with available pods: 0
Dec 23 12:57:11.952: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 23 12:57:11.989: INFO: Number of nodes with available pods: 0
Dec 23 12:57:11.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:13.008: INFO: Number of nodes with available pods: 0
Dec 23 12:57:13.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:14.003: INFO: Number of nodes with available pods: 0
Dec 23 12:57:14.003: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:15.021: INFO: Number of nodes with available pods: 0
Dec 23 12:57:15.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:16.007: INFO: Number of nodes with available pods: 0
Dec 23 12:57:16.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:17.726: INFO: Number of nodes with available pods: 0
Dec 23 12:57:17.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:18.377: INFO: Number of nodes with available pods: 0
Dec 23 12:57:18.378: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:19.003: INFO: Number of nodes with available pods: 0
Dec 23 12:57:19.003: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:20.081: INFO: Number of nodes with available pods: 0
Dec 23 12:57:20.082: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:21.071: INFO: Number of nodes with available pods: 0
Dec 23 12:57:21.072: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:22.799: INFO: Number of nodes with available pods: 0
Dec 23 12:57:22.799: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:24.332: INFO: Number of nodes with available pods: 0
Dec 23 12:57:24.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:27.005: INFO: Number of nodes with available pods: 0
Dec 23 12:57:27.005: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:28.005: INFO: Number of nodes with available pods: 0
Dec 23 12:57:28.005: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:29.018: INFO: Number of nodes with available pods: 0
Dec 23 12:57:29.018: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:30.006: INFO: Number of nodes with available pods: 0
Dec 23 12:57:30.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:32.717: INFO: Number of nodes with available pods: 0
Dec 23 12:57:32.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:33.543: INFO: Number of nodes with available pods: 0
Dec 23 12:57:33.543: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:34.008: INFO: Number of nodes with available pods: 0
Dec 23 12:57:34.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:35.951: INFO: Number of nodes with available pods: 0
Dec 23 12:57:35.951: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:36.007: INFO: Number of nodes with available pods: 0
Dec 23 12:57:36.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:37.006: INFO: Number of nodes with available pods: 0
Dec 23 12:57:37.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:38.005: INFO: Number of nodes with available pods: 0
Dec 23 12:57:38.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 23 12:57:39.005: INFO: Number of nodes with available pods: 1
Dec 23 12:57:39.005: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tlnm6, will wait for the garbage collector to delete the pods
Dec 23 12:57:39.103: INFO: Deleting DaemonSet.extensions daemon-set took: 11.667536ms
Dec 23 12:57:39.203: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.390899ms
Dec 23 12:57:52.712: INFO: Number of nodes with available pods: 0
Dec 23 12:57:52.712: INFO: Number of running nodes: 0, number of available pods: 0
Dec 23 12:57:52.717: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tlnm6/daemonsets","resourceVersion":"15795934"},"items":null}

Dec 23 12:57:52.720: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tlnm6/pods","resourceVersion":"15795934"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 12:57:52.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tlnm6" for this suite.
Dec 23 12:58:01.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:58:01.106: INFO: namespace: e2e-tests-daemonsets-tlnm6, resource: bindings, ignored listing per whitelist
Dec 23 12:58:01.179: INFO: namespace e2e-tests-daemonsets-tlnm6 deletion completed in 8.403712699s

• [SLOW TEST:58.882 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
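
Note: the spec above creates a DaemonSet named daemon-set whose pods are tied to a node label, relabels the node from blue to green so the daemon pod is unscheduled, and then points the DaemonSet at green with a RollingUpdate strategy. A minimal sketch of a manifest in that shape follows; the label key "color", the selector label "app", the container name and the image are illustrative assumptions, while the DaemonSet name and the green/RollingUpdate change come from the log.

import json

daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"app": "daemon-set"}},
        # the spec switches the strategy to RollingUpdate after relabelling the node
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"app": "daemon-set"}},
            "spec": {
                # only nodes carrying this label run a daemon pod
                "nodeSelector": {"color": "green"},
                "containers": [{"name": "app", "image": "docker.io/library/nginx:1.14-alpine"}],
            },
        },
    },
}

print(json.dumps(daemon_set, indent=2))  # could be created with: kubectl create -f <file>
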
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 12:58:01.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gzlzc
Dec 23 12:58:15.440: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gzlzc
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 12:58:15.451: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:02:16.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gzlzc" for this suite.
Dec 23 13:02:22.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:02:22.770: INFO: namespace: e2e-tests-container-probe-gzlzc, resource: bindings, ignored listing per whitelist
Dec 23 13:02:22.896: INFO: namespace e2e-tests-container-probe-gzlzc deletion completed in 6.216266659s

• [SLOW TEST:261.717 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
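
Note: this spec starts pod liveness-http with an HTTP liveness probe and then simply watches the restart count stay at 0 for roughly four minutes (12:58 to 13:02 above). A rough sketch of such a pod follows; the image is a placeholder (any server that keeps answering /healthz with 200 would do), and the port and probe timings are illustrative assumptions; only the pod name and the idea of an HTTP probe on /healthz come from the log.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-http"},
    "spec": {
        "containers": [{
            "name": "liveness",
            "image": "registry.example.com/healthz-server:latest",  # placeholder image
            "livenessProbe": {
                # as long as /healthz keeps answering 2xx the kubelet never restarts the container
                "httpGet": {"path": "/healthz", "port": 8080},
                "initialDelaySeconds": 15,
                "periodSeconds": 3,
                "failureThreshold": 1,
            },
        }],
    },
}

print(json.dumps(pod, indent=2))
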
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:02:22.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 23 13:02:23.187: INFO: Waiting up to 5m0s for pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005" in namespace "e2e-tests-var-expansion-8w4pw" to be "success or failure"
Dec 23 13:02:23.213: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.19599ms
Dec 23 13:02:25.225: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037780339s
Dec 23 13:02:27.241: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053866341s
Dec 23 13:02:29.641: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454316282s
Dec 23 13:02:32.558: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.370978249s
Dec 23 13:02:34.596: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.409356568s
STEP: Saw pod success
Dec 23 13:02:34.596: INFO: Pod "var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:02:34.614: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 23 13:02:34.895: INFO: Waiting for pod var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:02:34.957: INFO: Pod var-expansion-75c98ee0-2584-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:02:34.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-8w4pw" for this suite.
Dec 23 13:02:41.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:02:41.175: INFO: namespace: e2e-tests-var-expansion-8w4pw, resource: bindings, ignored listing per whitelist
Dec 23 13:02:41.186: INFO: namespace e2e-tests-var-expansion-8w4pw deletion completed in 6.211883183s

• [SLOW TEST:18.290 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
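
Note: the Variable Expansion spec above runs a short-lived pod whose container command references an environment variable with the $(VAR) syntax and checks the expanded value in the container log. A sketch of that shape; the pod name, image, variable name and value are illustrative, while the container name dapi-container appears in the log.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "var-expansion-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "env": [{"name": "MESSAGE", "value": "hello from the environment"}],
            # $(MESSAGE) is expanded by the kubelet from the container's env
            # before the command is handed to the runtime
            "command": ["sh", "-c", "echo $(MESSAGE)"],
        }],
    },
}

print(json.dumps(pod, indent=2))
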
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:02:41.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 23 13:02:41.511: INFO: Waiting up to 5m0s for pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-g8fsr" to be "success or failure"
Dec 23 13:02:41.720: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 208.81899ms
Dec 23 13:02:43.743: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23176413s
Dec 23 13:02:45.760: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248582058s
Dec 23 13:02:47.787: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275570876s
Dec 23 13:02:49.798: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285993973s
Dec 23 13:02:51.863: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.351810929s
STEP: Saw pod success
Dec 23 13:02:51.864: INFO: Pod "pod-80c8ebc3-2584-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:02:51.889: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-80c8ebc3-2584-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 13:02:52.049: INFO: Waiting for pod pod-80c8ebc3-2584-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:02:52.271: INFO: Pod pod-80c8ebc3-2584-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:02:52.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g8fsr" for this suite.
Dec 23 13:02:58.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:02:58.903: INFO: namespace: e2e-tests-emptydir-g8fsr, resource: bindings, ignored listing per whitelist
Dec 23 13:02:58.956: INFO: namespace e2e-tests-emptydir-g8fsr deletion completed in 6.653019798s

• [SLOW TEST:17.770 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
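
Note: the EmptyDir spec above runs a single short-lived pod as a non-root user with a memory-backed (tmpfs) emptyDir and checks a 0777 file inside it. A rough equivalent follows; the UID, image, mount path and command are illustrative assumptions, while the container name test-container appears in the log.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-tmpfs-demo"},
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1001},  # non-root, illustrative UID
        "volumes": [{"name": "scratch", "emptyDir": {"medium": "Memory"}}],  # tmpfs-backed
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "volumeMounts": [{"name": "scratch", "mountPath": "/scratch"}],
            # roughly what the spec verifies: a 0777 file on the tmpfs mount
            "command": ["sh", "-c", "touch /scratch/f && chmod 0777 /scratch/f && ls -l /scratch"],
        }],
    },
}

print(json.dumps(pod, indent=2))
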
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:02:58.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 23 13:02:59.324: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 13:02:59.342: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 13:02:59.395: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 23 13:02:59.419: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:02:59.419: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 23 13:02:59.419: INFO: 	Container weave ready: true, restart count 0
Dec 23 13:02:59.419: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 13:02:59.419: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 23 13:02:59.419: INFO: 	Container coredns ready: true, restart count 0
Dec 23 13:02:59.419: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:02:59.419: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:02:59.419: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:02:59.419: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 23 13:02:59.419: INFO: 	Container coredns ready: true, restart count 0
Dec 23 13:02:59.419: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 23 13:02:59.419: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-92b7fbaa-2584-11ea-a9d2-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-92b7fbaa-2584-11ea-a9d2-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-92b7fbaa-2584-11ea-a9d2-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:03:26.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-sv82q" for this suite.
Dec 23 13:03:46.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:03:46.275: INFO: namespace: e2e-tests-sched-pred-sv82q, resource: bindings, ignored listing per whitelist
Dec 23 13:03:46.348: INFO: namespace e2e-tests-sched-pred-sv82q deletion completed in 20.315275788s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:47.392 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
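
Note: the SchedulerPredicates spec above applies the random label kubernetes.io/e2e-92b7fbaa-2584-11ea-a9d2-0242ac110005=42 to the node and relaunches a pod whose nodeSelector requires exactly that label, showing the selector is honoured when it matches. A sketch of the relaunched pod; the pod name and image are illustrative, while the label key/value and node name come from the log.

import json

label_key = "kubernetes.io/e2e-92b7fbaa-2584-11ea-a9d2-0242ac110005"

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "with-labels"},
    "spec": {
        # the pod can only be scheduled onto a node carrying this exact label
        "nodeSelector": {label_key: "42"},
        "containers": [{"name": "with-labels", "image": "docker.io/library/nginx:1.14-alpine"}],
    },
}

print(json.dumps(pod, indent=2))
# the node side is a plain label, e.g.:
#   kubectl label node hunter-server-hu5at5svl7ps <label_key>=42
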
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:03:46.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1223 13:03:56.958699       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 13:03:56.959: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:03:56.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xdrdh" for this suite.
Dec 23 13:04:03.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:04:03.166: INFO: namespace: e2e-tests-gc-xdrdh, resource: bindings, ignored listing per whitelist
Dec 23 13:04:03.192: INFO: namespace e2e-tests-gc-xdrdh deletion completed in 6.222438537s

• [SLOW TEST:16.843 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
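
Note: in the garbage-collector spec above the ReplicationController is deleted without orphaning, so the garbage collector is expected to remove its pods before the "wait for all pods to be garbage collected" step completes. What "not orphaning" amounts to on the wire is the propagationPolicy field of the DeleteOptions body; a sketch is below, and the exact apiVersion string accepted for this body can vary between server versions.

import json

delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    # Background (or Foreground) lets the garbage collector delete the RC's pods;
    # Orphan would leave them behind
    "propagationPolicy": "Background",
}

# sent as the body of e.g.
#   DELETE /api/v1/namespaces/<namespace>/replicationcontrollers/<name>
print(json.dumps(delete_options, indent=2))
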
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:04:03.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-sbvn
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 13:04:03.592: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sbvn" in namespace "e2e-tests-subpath-zczbt" to be "success or failure"
Dec 23 13:04:03.648: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 56.155246ms
Dec 23 13:04:05.699: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106751144s
Dec 23 13:04:08.060: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468183911s
Dec 23 13:04:10.081: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489407279s
Dec 23 13:04:14.208: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61587052s
Dec 23 13:04:16.247: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.655410189s
Dec 23 13:04:18.260: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.668667851s
Dec 23 13:04:20.304: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.712265706s
Dec 23 13:04:24.121: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.529459792s
Dec 23 13:04:26.176: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.584501516s
Dec 23 13:04:29.158: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 25.566226258s
Dec 23 13:04:31.213: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 27.620986466s
Dec 23 13:04:33.231: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 29.639406625s
Dec 23 13:04:35.316: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 31.723754559s
Dec 23 13:04:37.337: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 33.744867839s
Dec 23 13:04:39.351: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 35.759495967s
Dec 23 13:04:41.377: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 37.785684329s
Dec 23 13:04:43.395: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 39.803334762s
Dec 23 13:04:45.404: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 41.812323889s
Dec 23 13:04:47.427: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 43.835167711s
Dec 23 13:04:49.459: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Running", Reason="", readiness=false. Elapsed: 45.867011467s
Dec 23 13:04:51.475: INFO: Pod "pod-subpath-test-configmap-sbvn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 47.883696623s
STEP: Saw pod success
Dec 23 13:04:51.476: INFO: Pod "pod-subpath-test-configmap-sbvn" satisfied condition "success or failure"
Dec 23 13:04:51.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-sbvn container test-container-subpath-configmap-sbvn: 
STEP: delete the pod
Dec 23 13:04:52.924: INFO: Waiting for pod pod-subpath-test-configmap-sbvn to disappear
Dec 23 13:04:52.955: INFO: Pod pod-subpath-test-configmap-sbvn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sbvn
Dec 23 13:04:52.955: INFO: Deleting pod "pod-subpath-test-configmap-sbvn" in namespace "e2e-tests-subpath-zczbt"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:04:53.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zczbt" for this suite.
Dec 23 13:05:01.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:05:01.737: INFO: namespace: e2e-tests-subpath-zczbt, resource: bindings, ignored listing per whitelist
Dec 23 13:05:01.753: INFO: namespace e2e-tests-subpath-zczbt deletion completed in 8.405256746s

• [SLOW TEST:58.561 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
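
Note: the Subpath spec above mounts a single key of a ConfigMap at a subPath inside the container of pod-subpath-test-configmap-sbvn and waits for the container to read it back. A trimmed sketch of that arrangement; the ConfigMap name, key, paths, image and command are illustrative, and the object names are shortened versions of the ones in the log.

import json

configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "subpath-data"},
    "data": {"file.txt": "mounted via subPath"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-subpath-test-configmap"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{"name": "cm", "configMap": {"name": "subpath-data"}}],
        "containers": [{
            "name": "test-container-subpath-configmap",
            "image": "busybox",
            # subPath mounts a single entry of the volume at the target path
            "volumeMounts": [{"name": "cm", "mountPath": "/data/file.txt", "subPath": "file.txt"}],
            "command": ["sh", "-c", "cat /data/file.txt"],
        }],
    },
}

print(json.dumps([configmap, pod], indent=2))
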
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:05:01.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 23 13:05:30.891: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:30.925: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 13:05:32.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:32.975: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 13:05:34.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:35.005: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 13:05:36.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:36.956: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 13:05:38.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:38.969: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 13:05:40.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:40.953: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 13:05:42.926: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 13:05:42.948: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:05:42.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xrhkf" for this suite.
Dec 23 13:06:09.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:06:10.027: INFO: namespace: e2e-tests-container-lifecycle-hook-xrhkf, resource: bindings, ignored listing per whitelist
Dec 23 13:06:10.426: INFO: namespace e2e-tests-container-lifecycle-hook-xrhkf deletion completed in 27.434655749s

• [SLOW TEST:68.672 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
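
Note: the lifecycle-hook spec above first starts a small HTTP handler pod, then creates pod-with-prestop-http-hook whose container declares a preStop httpGet hook pointing at that handler, deletes the pod, and finally checks that the handler received the hook request. A sketch of the hook side; the handler address, port, path and image are illustrative assumptions, while the pod name comes from the log.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-http-hook",
            "image": "docker.io/library/nginx:1.14-alpine",
            "lifecycle": {
                "preStop": {
                    # the kubelet issues this GET before stopping the container,
                    # which is what the "check prestop hook" step observes
                    "httpGet": {"host": "10.0.0.10", "port": 8080, "path": "/echo?msg=prestop"},
                },
            },
        }],
    },
}

print(json.dumps(pod, indent=2))
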
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:06:10.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-fd9ce31e-2584-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 23 13:06:10.951: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005" in namespace "e2e-tests-configmap-b859w" to be "success or failure"
Dec 23 13:06:11.176: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 224.903195ms
Dec 23 13:06:13.321: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369935806s
Dec 23 13:06:15.337: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386282875s
Dec 23 13:06:17.364: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413226888s
Dec 23 13:06:19.849: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.898213744s
Dec 23 13:06:21.925: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.973711312s
Dec 23 13:06:24.000: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.049305662s
STEP: Saw pod success
Dec 23 13:06:24.001: INFO: Pod "pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:06:24.063: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 23 13:06:24.353: INFO: Waiting for pod pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:06:24.485: INFO: Pod pod-configmaps-fd9e7011-2584-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:06:24.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-b859w" for this suite.
Dec 23 13:06:32.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:06:32.789: INFO: namespace: e2e-tests-configmap-b859w, resource: bindings, ignored listing per whitelist
Dec 23 13:06:32.990: INFO: namespace e2e-tests-configmap-b859w deletion completed in 8.465355527s

• [SLOW TEST:22.564 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
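
Note: the ConfigMap spec above mounts the same ConfigMap into one pod through two separate volumes and verifies its content is visible at both mount points. A trimmed sketch; the ConfigMap name, data, image and paths are illustrative, while the container name configmap-volume-test appears in the log.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-demo"},
    "spec": {
        "restartPolicy": "Never",
        # two volumes backed by the same ConfigMap
        "volumes": [
            {"name": "configmap-volume-1", "configMap": {"name": "configmap-test-volume"}},
            {"name": "configmap-volume-2", "configMap": {"name": "configmap-test-volume"}},
        ],
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            "volumeMounts": [
                {"name": "configmap-volume-1", "mountPath": "/etc/configmap-volume-1"},
                {"name": "configmap-volume-2", "mountPath": "/etc/configmap-volume-2"},
            ],
            "command": ["sh", "-c", "cat /etc/configmap-volume-1/* /etc/configmap-volume-2/*"],
        }],
    },
}

print(json.dumps(pod, indent=2))
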
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:06:32.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 23 13:06:33.313: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 23 13:06:38.340: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 23 13:06:46.394: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 23 13:06:46.638: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-sk5gr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sk5gr/deployments/test-cleanup-deployment,UID:12c6ebcb-2585-11ea-a994-fa163e34d433,ResourceVersion:15796821,Generation:1,CreationTimestamp:2019-12-23 13:06:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 23 13:06:46.653: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:06:46.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-sk5gr" for this suite.
Dec 23 13:06:55.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:06:55.593: INFO: namespace: e2e-tests-deployment-sk5gr, resource: bindings, ignored listing per whitelist
Dec 23 13:06:55.602: INFO: namespace e2e-tests-deployment-sk5gr deletion completed in 8.925623955s

• [SLOW TEST:22.612 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
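
Note: the Deployment dump above shows RevisionHistoryLimit:*0 on test-cleanup-deployment, which is what lets the spec assert that old ReplicaSets are cleaned up rather than retained. A trimmed manifest with just the relevant fields; the name, selector label and image are taken from the dump, everything omitted here is left to defaults.

import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "test-cleanup-deployment"},
    "spec": {
        "replicas": 1,
        # keep zero old ReplicaSets around once they are fully replaced
        "revisionHistoryLimit": 0,
        "selector": {"matchLabels": {"name": "cleanup-pod"}},
        "template": {
            "metadata": {"labels": {"name": "cleanup-pod"}},
            "spec": {
                "containers": [{
                    "name": "redis",
                    "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                }],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
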
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:06:55.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 23 13:06:55.793: INFO: Waiting up to 5m0s for pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-hcd9v" to be "success or failure"
Dec 23 13:06:56.804: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.011278803s
Dec 23 13:06:59.361: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.568593183s
Dec 23 13:07:01.393: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.599787762s
Dec 23 13:07:03.697: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.904608536s
Dec 23 13:07:05.763: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.970002101s
Dec 23 13:07:07.798: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.004759922s
Dec 23 13:07:09.828: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034838518s
Dec 23 13:07:11.985: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.192322294s
STEP: Saw pod success
Dec 23 13:07:11.985: INFO: Pod "pod-18598a6d-2585-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:07:12.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-18598a6d-2585-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 13:07:12.885: INFO: Waiting for pod pod-18598a6d-2585-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:07:12.892: INFO: Pod pod-18598a6d-2585-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:07:12.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hcd9v" for this suite.
Dec 23 13:07:21.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:07:21.086: INFO: namespace: e2e-tests-emptydir-hcd9v, resource: bindings, ignored listing per whitelist
Dec 23 13:07:21.187: INFO: namespace e2e-tests-emptydir-hcd9v deletion completed in 8.287941917s

• [SLOW TEST:25.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:07:21.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 13:07:21.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-h986h'
Dec 23 13:07:23.598: INFO: stderr: ""
Dec 23 13:07:23.598: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 23 13:07:38.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-h986h -o json'
Dec 23 13:07:38.845: INFO: stderr: ""
Dec 23 13:07:38.846: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-23T13:07:23Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-h986h\",\n        \"resourceVersion\": \"15796933\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-h986h/pods/e2e-test-nginx-pod\",\n        \"uid\": \"28ead9a7-2585-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-m6299\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-m6299\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-m6299\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-23T13:07:23Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-23T13:07:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-23T13:07:33Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-23T13:07:23Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://90f548a9939ec4634fd1dad9eca91962c7eb78189eafd7819812f582cfebf5f3\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-23T13:07:32Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-23T13:07:23Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 23 13:07:38.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-h986h'
Dec 23 13:07:39.525: INFO: stderr: ""
Dec 23 13:07:39.525: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 23 13:07:39.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-h986h'
Dec 23 13:07:49.627: INFO: stderr: ""
Dec 23 13:07:49.628: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:07:49.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h986h" for this suite.
Dec 23 13:07:55.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:07:55.975: INFO: namespace: e2e-tests-kubectl-h986h, resource: bindings, ignored listing per whitelist
Dec 23 13:07:56.105: INFO: namespace e2e-tests-kubectl-h986h deletion completed in 6.303753604s

• [SLOW TEST:34.918 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
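
Note: the kubectl replace spec above fetches the running e2e-test-nginx-pod as JSON, swaps its image, and pipes the result into kubectl replace -f -, after which the pod is expected to carry docker.io/library/busybox:1.29. The sketch below only shows the target state of the fields the spec checks; in the real run the complete object returned by kubectl get -o json is edited and sent back, since a bare partial pod spec would not be accepted by replace.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "e2e-test-nginx-pod",
        "namespace": "e2e-tests-kubectl-h986h",
        "labels": {"run": "e2e-test-nginx-pod"},
    },
    "spec": {
        "containers": [{
            "name": "e2e-test-nginx-pod",
            # the only field the spec actually changes
            "image": "docker.io/library/busybox:1.29",
        }],
    },
}

print(json.dumps(pod, indent=2))
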
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:07:56.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 23 13:07:56.589: INFO: Waiting up to 5m0s for pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005" in namespace "e2e-tests-emptydir-k7hv9" to be "success or failure"
Dec 23 13:07:56.642: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.756727ms
Dec 23 13:07:59.032: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.442333986s
Dec 23 13:08:01.047: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457550635s
Dec 23 13:08:03.155: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.56563711s
Dec 23 13:08:05.177: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.587056166s
Dec 23 13:08:07.206: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.616238524s
Dec 23 13:08:09.236: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.646065217s
Dec 23 13:08:11.270: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.680584131s
STEP: Saw pod success
Dec 23 13:08:11.270: INFO: Pod "pod-3c836f29-2585-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:08:11.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3c836f29-2585-11ea-a9d2-0242ac110005 container test-container: 
STEP: delete the pod
Dec 23 13:08:12.326: INFO: Waiting for pod pod-3c836f29-2585-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:08:12.477: INFO: Pod pod-3c836f29-2585-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:08:12.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k7hv9" for this suite.
Dec 23 13:08:18.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:08:18.877: INFO: namespace: e2e-tests-emptydir-k7hv9, resource: bindings, ignored listing per whitelist
Dec 23 13:08:18.920: INFO: namespace e2e-tests-emptydir-k7hv9 deletion completed in 6.42782032s

• [SLOW TEST:22.814 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:08:18.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4a10ea26-2585-11ea-a9d2-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 23 13:08:19.627: INFO: Waiting up to 5m0s for pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005" in namespace "e2e-tests-secrets-djqd5" to be "success or failure"
Dec 23 13:08:19.773: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 145.112884ms
Dec 23 13:08:22.229: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.600959021s
Dec 23 13:08:24.250: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622350947s
Dec 23 13:08:26.514: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886221639s
Dec 23 13:08:28.767: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.13933073s
Dec 23 13:08:30.781: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.153164391s
Dec 23 13:08:32.796: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.168289802s
STEP: Saw pod success
Dec 23 13:08:32.796: INFO: Pod "pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:08:32.802: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 23 13:08:33.350: INFO: Waiting for pod pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:08:33.848: INFO: Pod pod-secrets-4a4e27a3-2585-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:08:33.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-djqd5" for this suite.
Dec 23 13:08:40.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:08:40.270: INFO: namespace: e2e-tests-secrets-djqd5, resource: bindings, ignored listing per whitelist
Dec 23 13:08:40.370: INFO: namespace e2e-tests-secrets-djqd5 deletion completed in 6.49427955s
STEP: Destroying namespace "e2e-tests-secret-namespace-plnhl" for this suite.
Dec 23 13:08:46.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:08:46.673: INFO: namespace: e2e-tests-secret-namespace-plnhl, resource: bindings, ignored listing per whitelist
Dec 23 13:08:46.763: INFO: namespace e2e-tests-secret-namespace-plnhl deletion completed in 6.393175923s

• [SLOW TEST:27.843 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
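For reference, a minimal sketch of the kind of Pod the Secrets test above creates: a container that mounts a Secret as a read-only volume and reads a key from it. This is not the e2e test code itself; the object names, image, command, and mount path are illustrative assumptions, and the test additionally creates a same-named Secret in a second namespace to show the mount is unaffected.

```go
// Sketch only: a Pod consuming a Secret via a volume, using k8s.io/api types.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod returns a Pod that mounts Secret "secret-test" (assumed name)
// at /etc/secret-volume and prints one of its keys, then exits.
func secretVolumePod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
```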
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:08:46.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1223 13:09:03.636121       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 13:09:03.636: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:09:03.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zz4xl" for this suite.
Dec 23 13:09:34.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:09:34.924: INFO: namespace: e2e-tests-gc-zz4xl, resource: bindings, ignored listing per whitelist
Dec 23 13:09:34.938: INFO: namespace e2e-tests-gc-zz4xl deletion completed in 31.281778599s

• [SLOW TEST:48.174 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
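The garbage collector test above gives half of the pods a second owner and then deletes the first owner with foreground propagation; pods that still have a valid remaining owner must survive. A minimal sketch of those two pieces follows, assuming current client-go call signatures (which take a context, unlike the v1.13-era code this run exercises); variable and resource names are illustrative.

```go
// Sketch only: add a second OwnerReference to a Pod and foreground-delete
// the other owner, approximating what the GC conformance test drives.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addSecondOwner marks pod as also owned by rcToStay, so deleting the other
// owner does not make the pod garbage.
func addSecondOwner(pod *corev1.Pod, rcToStay *corev1.ReplicationController) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcToStay.Name,
		UID:        rcToStay.UID,
	})
}

// deleteOwnerForeground deletes the named RC and blocks deletion on its
// dependents; dependents that retain another valid owner are kept.
func deleteOwnerForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```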
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:09:34.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 23 13:09:36.253: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 13:09:36.350: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 13:09:36.365: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 23 13:09:36.400: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 23 13:09:36.401: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 13:09:36.401: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:09:36.401: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 23 13:09:36.401: INFO: 	Container weave ready: true, restart count 0
Dec 23 13:09:36.401: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 13:09:36.401: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 23 13:09:36.401: INFO: 	Container coredns ready: true, restart count 0
Dec 23 13:09:36.401: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:09:36.401: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:09:36.401: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 23 13:09:36.401: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 23 13:09:36.401: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.526: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 23 13:09:36.527: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-782c5b3a-2585-11ea-a9d2-0242ac110005.15e3021359ef4c9f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-7qkg7/filler-pod-782c5b3a-2585-11ea-a9d2-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-782c5b3a-2585-11ea-a9d2-0242ac110005.15e30214a849e032], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-782c5b3a-2585-11ea-a9d2-0242ac110005.15e30215aff43cb7], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-782c5b3a-2585-11ea-a9d2-0242ac110005.15e30215ff14042f], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e30216a379823d], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:09:51.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7qkg7" for this suite.
Dec 23 13:10:02.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:10:02.271: INFO: namespace: e2e-tests-sched-pred-7qkg7, resource: bindings, ignored listing per whitelist
Dec 23 13:10:02.295: INFO: namespace e2e-tests-sched-pred-7qkg7 deletion completed in 10.311785272s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:27.357 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
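The scheduler-predicates test above fills most of the node's allocatable CPU with pause pods and then submits one more pod whose CPU request cannot fit, producing the "0/1 nodes are available: 1 Insufficient cpu" event. A sketch of such an oversized pod is shown below; the 600m request is an illustrative assumption, not a figure taken from this run.

```go
// Sketch only: a Pod whose CPU request exceeds the node's remaining
// allocatable CPU, which the scheduler rejects with "Insufficient cpu".
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func oversizedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Assumed value: larger than what is left on the node.
						corev1.ResourceCPU: resource.MustParse("600m"),
					},
				},
			}},
		},
	}
}
```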
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 23 13:10:02.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 23 13:10:02.641: INFO: Waiting up to 5m0s for pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005" in namespace "e2e-tests-downward-api-745mz" to be "success or failure"
Dec 23 13:10:02.658: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.673373ms
Dec 23 13:10:04.683: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041610019s
Dec 23 13:10:06.729: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088296256s
Dec 23 13:10:09.611: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.970066409s
Dec 23 13:10:11.633: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.991992114s
Dec 23 13:10:13.675: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.034135768s
STEP: Saw pod success
Dec 23 13:10:13.675: INFO: Pod "downward-api-87ba562e-2585-11ea-a9d2-0242ac110005" satisfied condition "success or failure"
Dec 23 13:10:13.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-87ba562e-2585-11ea-a9d2-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 23 13:10:13.856: INFO: Waiting for pod downward-api-87ba562e-2585-11ea-a9d2-0242ac110005 to disappear
Dec 23 13:10:13.908: INFO: Pod downward-api-87ba562e-2585-11ea-a9d2-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 23 13:10:13.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-745mz" for this suite.
Dec 23 13:10:20.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:10:20.134: INFO: namespace: e2e-tests-downward-api-745mz, resource: bindings, ignored listing per whitelist
Dec 23 13:10:20.224: INFO: namespace e2e-tests-downward-api-745mz deletion completed in 6.299076617s

• [SLOW TEST:17.928 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
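The Downward API test above injects the node's host IP into the container environment. A minimal sketch of that wiring, using a fieldRef to status.hostIP, is below; the pod name, image, command, and env var name are illustrative assumptions rather than the test's exact values.

```go
// Sketch only: expose the host IP to a container via the downward API.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// Populated by the kubelet from the pod's status.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}
```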
SSSSSS
Dec 23 13:10:20.225: INFO: Running AfterSuite actions on all nodes
Dec 23 13:10:20.225: INFO: Running AfterSuite actions on node 1
Dec 23 13:10:20.225: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8581.884 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS