I1217 10:47:16.948746 8 e2e.go:224] Starting e2e run "97168600-20ba-11ea-a5ef-0242ac110004" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576579635 - Will randomize all specs
Will run 201 of 2164 specs

Dec 17 10:47:17.183: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 10:47:17.190: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 17 10:47:17.207: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 17 10:47:17.231: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 17 10:47:17.231: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 17 10:47:17.231: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 17 10:47:17.238: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 17 10:47:17.238: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 17 10:47:17.238: INFO: e2e test version: v1.13.12
Dec 17 10:47:17.239: INFO: kube-apiserver version: v1.13.8
SSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:47:17.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
Dec 17 10:47:17.434: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 17 10:47:17.549: INFO: Waiting up to 5m0s for pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004" in namespace "e2e-tests-var-expansion-dq4zj" to be "success or failure"
Dec 17 10:47:17.590: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 40.581602ms
Dec 17 10:47:19.608: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058989936s
Dec 17 10:47:21.623: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073391034s
Dec 17 10:47:23.671: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121706867s
Dec 17 10:47:25.861: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311173666s
Dec 17 10:47:27.896: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.347095481s
STEP: Saw pod success
Dec 17 10:47:27.897: INFO: Pod "var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 10:47:27.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004 container dapi-container:
STEP: delete the pod
Dec 17 10:47:28.015: INFO: Waiting for pod var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004 to disappear
Dec 17 10:47:30.905: INFO: Pod var-expansion-98003fd8-20ba-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:47:30.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-dq4zj" for this suite.
Dec 17 10:47:37.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:47:37.391: INFO: namespace: e2e-tests-var-expansion-dq4zj, resource: bindings, ignored listing per whitelist
Dec 17 10:47:37.623: INFO: namespace e2e-tests-var-expansion-dq4zj deletion completed in 6.37611485s
• [SLOW TEST:20.384 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:47:37.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 10:47:37.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-wrkzf'
Dec 17 10:47:39.675: INFO: stderr: ""
Dec 17 10:47:39.675: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 17 10:47:39.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-wrkzf'
Dec 17 10:47:42.669: INFO: stderr: ""
Dec 17 10:47:42.669: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:47:42.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wrkzf" for this suite.
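[Editor's note] The Variable Expansion spec above verifies that a container env var can be composed from earlier ones via Kubernetes' `$(VAR)` syntax. As a rough illustration only — this helper is hypothetical, not the e2e test's actual Go code, and it omits the real kubelet's `$$(VAR)` escaping rule — the expansion behaves roughly like:

```python
import re

def expand(value, env):
    """Expand $(VAR) references against previously defined env vars,
    approximating Kubernetes' container env composition."""
    def repl(match):
        name = match.group(1)
        # Undefined references are left verbatim, as the kubelet does.
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, value)

# A var defined earlier in the env list can feed a later one:
env = {"FOO": "foo-value"}
composed = expand("test-value-$(FOO)", env)
print(composed)  # test-value-foo-value
```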
Dec 17 10:47:48.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:47:49.025: INFO: namespace: e2e-tests-kubectl-wrkzf, resource: bindings, ignored listing per whitelist
Dec 17 10:47:49.078: INFO: namespace e2e-tests-kubectl-wrkzf deletion completed in 6.309643609s
• [SLOW TEST:11.454 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:47:49.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 17 10:47:50.110: INFO: Pod name wrapped-volume-race-ab667fe1-20ba-11ea-a5ef-0242ac110004: Found 0 pods out of 5
Dec 17 10:47:55.162: INFO: Pod name wrapped-volume-race-ab667fe1-20ba-11ea-a5ef-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ab667fe1-20ba-11ea-a5ef-0242ac110004 in namespace e2e-tests-emptydir-wrapper-pg48n, will wait for the garbage collector to delete the pods
Dec 17 10:49:37.332: INFO: Deleting ReplicationController wrapped-volume-race-ab667fe1-20ba-11ea-a5ef-0242ac110004 took: 21.591014ms
Dec 17 10:49:37.833: INFO: Terminating ReplicationController wrapped-volume-race-ab667fe1-20ba-11ea-a5ef-0242ac110004 pods took: 500.643ms
STEP: Creating RC which spawns configmap-volume pods
Dec 17 10:50:23.836: INFO: Pod name wrapped-volume-race-06fb851d-20bb-11ea-a5ef-0242ac110004: Found 0 pods out of 5
Dec 17 10:50:28.875: INFO: Pod name wrapped-volume-race-06fb851d-20bb-11ea-a5ef-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-06fb851d-20bb-11ea-a5ef-0242ac110004 in namespace e2e-tests-emptydir-wrapper-pg48n, will wait for the garbage collector to delete the pods
Dec 17 10:52:45.087: INFO: Deleting ReplicationController wrapped-volume-race-06fb851d-20bb-11ea-a5ef-0242ac110004 took: 20.079075ms
Dec 17 10:52:45.488: INFO: Terminating ReplicationController wrapped-volume-race-06fb851d-20bb-11ea-a5ef-0242ac110004 pods took: 401.160723ms
STEP: Creating RC which spawns configmap-volume pods
Dec 17 10:53:33.438: INFO: Pod name wrapped-volume-race-780699e3-20bb-11ea-a5ef-0242ac110004: Found 0 pods out of 5
Dec 17 10:53:38.487: INFO: Pod name wrapped-volume-race-780699e3-20bb-11ea-a5ef-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-780699e3-20bb-11ea-a5ef-0242ac110004 in namespace e2e-tests-emptydir-wrapper-pg48n, will wait for the garbage collector to delete the pods
Dec 17 10:55:54.879: INFO: Deleting ReplicationController wrapped-volume-race-780699e3-20bb-11ea-a5ef-0242ac110004 took: 55.596798ms
Dec 17 10:55:55.780: INFO: Terminating ReplicationController wrapped-volume-race-780699e3-20bb-11ea-a5ef-0242ac110004 pods took: 900.983588ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:56:44.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-pg48n" for this suite.
Dec 17 10:56:54.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:56:54.862: INFO: namespace: e2e-tests-emptydir-wrapper-pg48n, resource: bindings, ignored listing per whitelist
Dec 17 10:56:55.039: INFO: namespace e2e-tests-emptydir-wrapper-pg48n deletion completed in 10.233577987s
• [SLOW TEST:545.960 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:56:55.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 17 10:56:55.309: INFO: Waiting up to 5m0s for pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-c8l6x" to be "success or failure"
Dec 17 10:56:55.318: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.906469ms
Dec 17 10:56:57.602: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29374958s
Dec 17 10:57:00.374: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.065711457s
Dec 17 10:57:02.393: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084765651s
Dec 17 10:57:05.771: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.461957565s
Dec 17 10:57:07.790: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.481163473s
Dec 17 10:57:09.808: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.499397522s
STEP: Saw pod success
Dec 17 10:57:09.808: INFO: Pod "pod-f06deec2-20bb-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 10:57:09.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f06deec2-20bb-11ea-a5ef-0242ac110004 container test-container:
STEP: delete the pod
Dec 17 10:57:11.003: INFO: Waiting for pod pod-f06deec2-20bb-11ea-a5ef-0242ac110004 to disappear
Dec 17 10:57:11.012: INFO: Pod pod-f06deec2-20bb-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:57:11.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c8l6x" for this suite.
Dec 17 10:57:17.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:57:17.191: INFO: namespace: e2e-tests-emptydir-c8l6x, resource: bindings, ignored listing per whitelist
Dec 17 10:57:17.349: INFO: namespace e2e-tests-emptydir-c8l6x deletion completed in 6.327627126s
• [SLOW TEST:22.310 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:57:17.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-fdc7522d-20bb-11ea-a5ef-0242ac110004
STEP: Creating secret with name s-test-opt-upd-fdc752f3-20bb-11ea-a5ef-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fdc7522d-20bb-11ea-a5ef-0242ac110004
STEP: Updating secret s-test-opt-upd-fdc752f3-20bb-11ea-a5ef-0242ac110004
STEP: Creating secret with name s-test-opt-create-fdc75313-20bb-11ea-a5ef-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:57:36.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-n5b2v" for this suite.
Dec 17 10:58:02.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:58:02.537: INFO: namespace: e2e-tests-secrets-n5b2v, resource: bindings, ignored listing per whitelist
Dec 17 10:58:02.548: INFO: namespace e2e-tests-secrets-n5b2v deletion completed in 26.291313944s
• [SLOW TEST:45.199 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:58:02.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 17 10:58:13.480: INFO: Successfully updated pod "annotationupdate189f7e1e-20bc-11ea-a5ef-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:58:15.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lpwfv" for this suite.
Dec 17 10:58:39.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:58:39.725: INFO: namespace: e2e-tests-downward-api-lpwfv, resource: bindings, ignored listing per whitelist
Dec 17 10:58:39.870: INFO: namespace e2e-tests-downward-api-lpwfv deletion completed in 24.319156824s
• [SLOW TEST:37.322 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:58:39.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 17 10:58:40.051: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-k7wwh" to be "success or failure"
Dec 17 10:58:40.079: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.275778ms
Dec 17 10:58:42.136: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085291409s
Dec 17 10:58:44.158: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107188609s
Dec 17 10:58:46.263: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212488482s
Dec 17 10:58:48.346: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.295694371s
Dec 17 10:58:50.715: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663934576s
Dec 17 10:58:52.723: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.672087069s
Dec 17 10:58:54.739: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.688326862s
Dec 17 10:58:56.760: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.708735397s
STEP: Saw pod success
Dec 17 10:58:56.760: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 17 10:58:56.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 17 10:58:56.847: INFO: Waiting for pod pod-host-path-test to disappear
Dec 17 10:58:56.864: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:58:56.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-k7wwh" for this suite.
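[Editor's note] The emptyDir `(root,0777,tmpfs)` spec and the hostPath mode spec above both assert on the permission bits a container observes on its mount path. A minimal local sketch of that kind of mode check (hypothetical scratch paths, not the test container's code):

```python
import os
import stat
import tempfile

# Create a scratch directory standing in for the volume mount path,
# set it to 0777, then read the mode bits back the way the e2e test
# containers do (a stat on the mount path).
path = tempfile.mkdtemp()
os.chmod(path, 0o777)  # chmod is not masked by umask
mode = stat.S_IMODE(os.stat(path).st_mode)  # strip the file-type bits
print(oct(mode))  # 0o777
```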
Dec 17 10:59:02.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:59:03.076: INFO: namespace: e2e-tests-hostpath-k7wwh, resource: bindings, ignored listing per whitelist
Dec 17 10:59:03.157: INFO: namespace e2e-tests-hostpath-k7wwh deletion completed in 6.275132258s
• [SLOW TEST:23.286 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:59:03.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-m4bx
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 10:59:03.596: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-m4bx" in namespace "e2e-tests-subpath-ppvkt" to be "success or failure"
Dec 17 10:59:03.622: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 25.911543ms
Dec 17 10:59:06.347: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.751293592s
Dec 17 10:59:08.388: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.792351282s
Dec 17 10:59:10.399: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802678346s
Dec 17 10:59:12.411: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.815310245s
Dec 17 10:59:14.426: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.830404942s
Dec 17 10:59:17.337: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.740838939s
Dec 17 10:59:19.498: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Pending", Reason="", readiness=false. Elapsed: 15.902326443s
Dec 17 10:59:21.515: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 17.918862849s
Dec 17 10:59:23.533: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 19.93736914s
Dec 17 10:59:25.597: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 22.001008974s
Dec 17 10:59:27.610: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 24.014234596s
Dec 17 10:59:29.626: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 26.030334371s
Dec 17 10:59:31.642: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 28.046468522s
Dec 17 10:59:33.661: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 30.064851118s
Dec 17 10:59:35.677: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Running", Reason="", readiness=false. Elapsed: 32.081422336s
Dec 17 10:59:37.740: INFO: Pod "pod-subpath-test-projected-m4bx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.144530871s
STEP: Saw pod success
Dec 17 10:59:37.741: INFO: Pod "pod-subpath-test-projected-m4bx" satisfied condition "success or failure"
Dec 17 10:59:37.753: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-m4bx container test-container-subpath-projected-m4bx:
STEP: delete the pod
Dec 17 10:59:38.021: INFO: Waiting for pod pod-subpath-test-projected-m4bx to disappear
Dec 17 10:59:38.037: INFO: Pod pod-subpath-test-projected-m4bx no longer exists
STEP: Deleting pod pod-subpath-test-projected-m4bx
Dec 17 10:59:38.037: INFO: Deleting pod "pod-subpath-test-projected-m4bx" in namespace "e2e-tests-subpath-ppvkt"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:59:38.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ppvkt" for this suite.
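[Editor's note] The "Atomic writer volumes" spec above exercises projected volumes, which the kubelet updates by writing new content into a fresh timestamped directory and atomically swapping a `..data` symlink. A rough local sketch of that swap under assumed scratch paths (the real kubelet's AtomicWriter does considerably more bookkeeping):

```python
import os
import tempfile

root = tempfile.mkdtemp()

# Write the new payload into a fresh timestamped directory.
new_dir = os.path.join(root, "..2019_12_17_new")
os.mkdir(new_dir)
with open(os.path.join(new_dir, "key"), "w") as f:
    f.write("v2")

# Publish it atomically: create a temp symlink, then rename it onto
# ..data. rename() replaces the old link in a single atomic step, so
# readers never observe a half-updated volume.
tmp_link = os.path.join(root, "..data_tmp")
os.symlink(os.path.basename(new_dir), tmp_link)
os.rename(tmp_link, os.path.join(root, "..data"))

with open(os.path.join(root, "..data", "key")) as f:
    print(f.read())  # v2
```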
Dec 17 10:59:46.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 10:59:46.310: INFO: namespace: e2e-tests-subpath-ppvkt, resource: bindings, ignored listing per whitelist
Dec 17 10:59:46.418: INFO: namespace e2e-tests-subpath-ppvkt deletion completed in 8.309256156s
• [SLOW TEST:43.261 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 10:59:46.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 17 10:59:54.005: INFO: 10 pods remaining
Dec 17 10:59:54.005: INFO: 8 pods has nil DeletionTimestamp
Dec 17 10:59:54.005: INFO:
Dec 17 10:59:54.606: INFO: 6 pods remaining
Dec 17 10:59:54.606: INFO: 0 pods has nil DeletionTimestamp
Dec 17 10:59:54.606: INFO:
STEP: Gathering metrics
W1217 10:59:55.208026 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 10:59:55.208: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 10:59:55.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-gc8hw" for this suite.
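[Editor's note] The garbage-collector spec above deletes an RC with deleteOptions that keep it around until its pods are gone; that behavior rests on the pods' ownerReferences carrying `blockOwnerDeletion`. A deliberately simplified sketch of that rule (hypothetical helper and data, not the GC's actual code):

```python
def can_finalize(owner_uid, dependents):
    """Return True when no remaining dependent still blocks the owner.

    Simplified foreground-deletion semantics: the owner object may be
    removed only once every dependent whose ownerReference points at it
    with blockOwnerDeletion=True has itself been deleted.
    """
    return not any(
        d["ownerUID"] == owner_uid and d.get("blockOwnerDeletion")
        for d in dependents
    )

# Illustrative dependents, shaped like the RC's pods in the log above.
pods = [
    {"name": "simpletest-pod-1", "ownerUID": "rc-uid", "blockOwnerDeletion": True},
    {"name": "simpletest-pod-2", "ownerUID": "rc-uid", "blockOwnerDeletion": True},
]
print(can_finalize("rc-uid", pods))  # False: pods still block the RC
print(can_finalize("rc-uid", []))    # True once the GC has removed them
```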
Dec 17 11:00:13.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:00:13.384: INFO: namespace: e2e-tests-gc-gc8hw, resource: bindings, ignored listing per whitelist
Dec 17 11:00:13.449: INFO: namespace e2e-tests-gc-gc8hw deletion completed in 18.235074569s
• [SLOW TEST:27.031 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:00:13.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 11:00:13.898: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"66adaf7b-20bc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c48532), BlockOwnerDeletion:(*bool)(0xc001c48533)}}
Dec 17 11:00:14.046: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"66a97130-20bc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c4872a), BlockOwnerDeletion:(*bool)(0xc001c4872b)}}
Dec 17 11:00:14.220: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"66aa881f-20bc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c48972), BlockOwnerDeletion:(*bool)(0xc001c48973)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:00:19.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-b25ps" for this suite.
Dec 17 11:00:25.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:00:25.774: INFO: namespace: e2e-tests-gc-b25ps, resource: bindings, ignored listing per whitelist
Dec 17 11:00:25.792: INFO: namespace e2e-tests-gc-b25ps deletion completed in 6.511220348s
• [SLOW TEST:12.343 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:00:25.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 17 11:00:25.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kn7md'
Dec 17 11:00:28.610: INFO: stderr: ""
Dec 17 11:00:28.610: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 11:00:28.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md'
Dec 17 11:00:29.032: INFO: stderr: ""
Dec 17 11:00:29.032: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-npcwb "
Dec 17 11:00:29.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:29.180: INFO: stderr: "" Dec 17 11:00:29.180: INFO: stdout: "" Dec 17 11:00:29.180: INFO: update-demo-nautilus-27jl9 is created but not running Dec 17 11:00:34.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:34.338: INFO: stderr: "" Dec 17 11:00:34.338: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-npcwb " Dec 17 11:00:34.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:34.530: INFO: stderr: "" Dec 17 11:00:34.530: INFO: stdout: "" Dec 17 11:00:34.530: INFO: update-demo-nautilus-27jl9 is created but not running Dec 17 11:00:39.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:39.711: INFO: stderr: "" Dec 17 11:00:39.711: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-npcwb " Dec 17 11:00:39.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:39.849: INFO: stderr: "" Dec 17 11:00:39.850: INFO: stdout: "" Dec 17 11:00:39.850: INFO: update-demo-nautilus-27jl9 is created but not running Dec 17 11:00:44.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:45.095: INFO: stderr: "" Dec 17 11:00:45.095: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-npcwb " Dec 17 11:00:45.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:45.192: INFO: stderr: "" Dec 17 11:00:45.192: INFO: stdout: "true" Dec 17 11:00:45.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:45.316: INFO: stderr: "" Dec 17 11:00:45.316: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:00:45.316: INFO: validating pod update-demo-nautilus-27jl9 Dec 17 11:00:45.335: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:00:45.335: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:00:45.335: INFO: update-demo-nautilus-27jl9 is verified up and running Dec 17 11:00:45.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-npcwb -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:45.444: INFO: stderr: "" Dec 17 11:00:45.445: INFO: stdout: "true" Dec 17 11:00:45.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-npcwb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:45.562: INFO: stderr: "" Dec 17 11:00:45.562: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:00:45.562: INFO: validating pod update-demo-nautilus-npcwb Dec 17 11:00:45.586: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:00:45.586: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:00:45.586: INFO: update-demo-nautilus-npcwb is verified up and running STEP: scaling down the replication controller Dec 17 11:00:45.589: INFO: scanned /root for discovery docs: Dec 17 11:00:45.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:46.913: INFO: stderr: "" Dec 17 11:00:46.913: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 17 11:00:46.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:47.202: INFO: stderr: "" Dec 17 11:00:47.203: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-npcwb " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 17 11:00:52.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:52.362: INFO: stderr: "" Dec 17 11:00:52.362: INFO: stdout: "update-demo-nautilus-27jl9 " Dec 17 11:00:52.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:52.547: INFO: stderr: "" Dec 17 11:00:52.547: INFO: stdout: "true" Dec 17 11:00:52.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:52.745: INFO: stderr: "" Dec 17 11:00:52.745: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:00:52.745: INFO: validating pod update-demo-nautilus-27jl9 Dec 17 11:00:52.760: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:00:52.761: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 17 11:00:52.761: INFO: update-demo-nautilus-27jl9 is verified up and running STEP: scaling up the replication controller Dec 17 11:00:52.764: INFO: scanned /root for discovery docs: Dec 17 11:00:52.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:53.996: INFO: stderr: "" Dec 17 11:00:53.996: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 17 11:00:53.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:54.139: INFO: stderr: "" Dec 17 11:00:54.139: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-gw9cl " Dec 17 11:00:54.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:54.298: INFO: stderr: "" Dec 17 11:00:54.298: INFO: stdout: "true" Dec 17 11:00:54.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:54.389: INFO: stderr: "" Dec 17 11:00:54.389: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:00:54.389: INFO: validating pod update-demo-nautilus-27jl9 Dec 17 11:00:54.400: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:00:54.400: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:00:54.400: INFO: update-demo-nautilus-27jl9 is verified up and running Dec 17 11:00:54.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw9cl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:54.539: INFO: stderr: "" Dec 17 11:00:54.539: INFO: stdout: "" Dec 17 11:00:54.539: INFO: update-demo-nautilus-gw9cl is created but not running Dec 17 11:00:59.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:59.729: INFO: stderr: "" Dec 17 11:00:59.729: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-gw9cl " Dec 17 11:00:59.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:00:59.870: INFO: stderr: "" Dec 17 11:00:59.870: INFO: stdout: "true" Dec 17 11:00:59.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:00.036: INFO: stderr: "" Dec 17 11:01:00.037: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:01:00.037: INFO: validating pod update-demo-nautilus-27jl9 Dec 17 11:01:00.074: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:01:00.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:01:00.074: INFO: update-demo-nautilus-27jl9 is verified up and running Dec 17 11:01:00.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw9cl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:00.322: INFO: stderr: "" Dec 17 11:01:00.322: INFO: stdout: "" Dec 17 11:01:00.322: INFO: update-demo-nautilus-gw9cl is created but not running Dec 17 11:01:05.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:05.574: INFO: stderr: "" Dec 17 11:01:05.574: INFO: stdout: "update-demo-nautilus-27jl9 update-demo-nautilus-gw9cl " Dec 17 11:01:05.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:05.736: INFO: stderr: "" Dec 17 11:01:05.736: INFO: stdout: "true" Dec 17 11:01:05.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27jl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:05.919: INFO: stderr: "" Dec 17 11:01:05.920: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:01:05.920: INFO: validating pod update-demo-nautilus-27jl9 Dec 17 11:01:05.932: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:01:05.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:01:05.933: INFO: update-demo-nautilus-27jl9 is verified up and running Dec 17 11:01:05.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw9cl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:06.066: INFO: stderr: "" Dec 17 11:01:06.066: INFO: stdout: "true" Dec 17 11:01:06.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw9cl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:06.212: INFO: stderr: "" Dec 17 11:01:06.212: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:01:06.212: INFO: validating pod update-demo-nautilus-gw9cl Dec 17 11:01:06.221: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:01:06.221: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:01:06.221: INFO: update-demo-nautilus-gw9cl is verified up and running STEP: using delete to clean up resources Dec 17 11:01:06.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:06.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 17 11:01:06.357: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 17 11:01:06.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kn7md' Dec 17 11:01:06.631: INFO: stderr: "No resources found.\n" Dec 17 11:01:06.631: INFO: stdout: "" Dec 17 11:01:06.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kn7md -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 17 11:01:06.917: INFO: stderr: "" Dec 17 11:01:06.917: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:01:06.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kn7md" for this 
suite. Dec 17 11:01:30.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:01:31.184: INFO: namespace: e2e-tests-kubectl-kn7md, resource: bindings, ignored listing per whitelist Dec 17 11:01:31.256: INFO: namespace e2e-tests-kubectl-kn7md deletion completed in 24.318654855s • [SLOW TEST:65.463 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:01:31.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 17 11:01:31.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-dhv8d" to be "success or failure" Dec 17 11:01:31.548: 
INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 34.668307ms Dec 17 11:01:33.777: INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263855223s Dec 17 11:01:35.788: INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274825757s Dec 17 11:01:37.807: INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294259612s Dec 17 11:01:39.821: INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307432705s Dec 17 11:01:41.839: INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.325776198s STEP: Saw pod success Dec 17 11:01:41.839: INFO: Pod "downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:01:41.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004 container client-container: STEP: delete the pod Dec 17 11:01:41.963: INFO: Waiting for pod downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004 to disappear Dec 17 11:01:41.987: INFO: Pod downwardapi-volume-950f5975-20bc-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:01:41.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dhv8d" for this suite. 
Dec 17 11:01:48.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:01:48.247: INFO: namespace: e2e-tests-projected-dhv8d, resource: bindings, ignored listing per whitelist Dec 17 11:01:48.257: INFO: namespace e2e-tests-projected-dhv8d deletion completed in 6.263959554s • [SLOW TEST:17.002 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:01:48.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 17 11:01:48.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-2mkrh" to be "success or failure" Dec 17 11:01:48.494: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004": Phase="Pending", 
Reason="", readiness=false. Elapsed: 20.837862ms Dec 17 11:01:50.520: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046616303s Dec 17 11:01:52.548: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074687852s Dec 17 11:01:54.574: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101153875s Dec 17 11:01:56.594: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121005753s Dec 17 11:01:58.790: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.317054234s STEP: Saw pod success Dec 17 11:01:58.790: INFO: Pod "downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:01:58.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004 container client-container: STEP: delete the pod Dec 17 11:02:00.057: INFO: Waiting for pod downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004 to disappear Dec 17 11:02:00.068: INFO: Pod downwardapi-volume-9f218617-20bc-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:02:00.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2mkrh" for this suite. 
Dec 17 11:02:06.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:02:06.403: INFO: namespace: e2e-tests-projected-2mkrh, resource: bindings, ignored listing per whitelist Dec 17 11:02:06.405: INFO: namespace e2e-tests-projected-2mkrh deletion completed in 6.296861851s • [SLOW TEST:18.147 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:02:06.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 17 11:02:06.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-tgl9c' Dec 17 11:02:06.946: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 17 11:02:06.946: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Dec 17 11:02:10.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-tgl9c' Dec 17 11:02:11.222: INFO: stderr: "" Dec 17 11:02:11.223: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:02:11.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tgl9c" for this suite. 
Dec 17 11:02:17.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:02:17.506: INFO: namespace: e2e-tests-kubectl-tgl9c, resource: bindings, ignored listing per whitelist
Dec 17 11:02:17.610: INFO: namespace e2e-tests-kubectl-tgl9c deletion completed in 6.363310504s
• [SLOW TEST:11.205 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:02:17.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-n9hns
Dec 17 11:02:28.333: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-n9hns
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 11:02:28.339: INFO: Initial restart count of pod liveness-exec is 0
Dec 17 11:03:20.970: INFO: Restart count of pod e2e-tests-container-probe-n9hns/liveness-exec is now 1 (52.630700233s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:03:21.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-n9hns" for this suite.
Dec 17 11:03:29.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:03:29.200: INFO: namespace: e2e-tests-container-probe-n9hns, resource: bindings, ignored listing per whitelist
Dec 17 11:03:29.349: INFO: namespace e2e-tests-container-probe-n9hns deletion completed in 8.233036835s
• [SLOW TEST:71.738 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:03:29.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 17 11:03:29.618: INFO: Waiting up to 5m0s for pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-2f82s" to be "success or failure"
Dec 17 11:03:29.700: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 81.697347ms
Dec 17 11:03:32.009: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391325707s
Dec 17 11:03:34.057: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438937613s
Dec 17 11:03:36.073: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45504814s
Dec 17 11:03:38.185: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56699382s
Dec 17 11:03:40.319: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.700704921s
Dec 17 11:03:42.482: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false.
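The repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" records above come from a polling loop: the framework re-reads the pod phase every couple of seconds, logging the phase and elapsed time, until the pod reaches a terminal phase or the timeout expires. A minimal sketch of that pattern (Python stand-in for the Go framework helper; the function and parameter names here are illustrative, not the framework's):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal pod phase or the timeout.

    Mirrors the log's 'Pod "...": Phase="Pending" ... Elapsed: Ns'
    records: each attempt reports the current phase and elapsed time.
    """
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

Injecting `now` and `sleep` keeps the loop testable without real delays, which is also why the elapsed values in the log grow in roughly `interval`-sized steps.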
Elapsed: 12.863742166s
STEP: Saw pod success
Dec 17 11:03:42.482: INFO: Pod "pod-db74aab0-20bc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:03:42.522: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db74aab0-20bc-11ea-a5ef-0242ac110004 container test-container:
STEP: delete the pod
Dec 17 11:03:42.775: INFO: Waiting for pod pod-db74aab0-20bc-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:03:42.870: INFO: Pod pod-db74aab0-20bc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:03:42.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2f82s" for this suite.
Dec 17 11:03:48.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:03:48.981: INFO: namespace: e2e-tests-emptydir-2f82s, resource: bindings, ignored listing per whitelist
Dec 17 11:03:49.086: INFO: namespace e2e-tests-emptydir-2f82s deletion completed in 6.183722836s
• [SLOW TEST:19.737 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:03:49.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 17 11:03:49.303: INFO: Waiting up to 5m0s for pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-tx7p7" to be "success or failure"
Dec 17 11:03:49.326: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.239901ms
Dec 17 11:03:51.525: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222367541s
Dec 17 11:03:53.544: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240889551s
Dec 17 11:03:55.805: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.502445552s
Dec 17 11:03:57.818: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514885352s
Dec 17 11:03:59.839: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.536151619s
STEP: Saw pod success
Dec 17 11:03:59.839: INFO: Pod "pod-e730c293-20bc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:03:59.867: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e730c293-20bc-11ea-a5ef-0242ac110004 container test-container:
STEP: delete the pod
Dec 17 11:03:59.986: INFO: Waiting for pod pod-e730c293-20bc-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:03:59.992: INFO: Pod pod-e730c293-20bc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:03:59.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tx7p7" for this suite.
Dec 17 11:04:06.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:04:06.317: INFO: namespace: e2e-tests-emptydir-tx7p7, resource: bindings, ignored listing per whitelist
Dec 17 11:04:06.340: INFO: namespace e2e-tests-emptydir-tx7p7 deletion completed in 6.336660977s
• [SLOW TEST:17.254 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:04:06.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 17 11:04:06.748: INFO: Waiting up to 5m0s for pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-hs72h" to be "success or failure"
Dec 17 11:04:06.891: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 142.397678ms
Dec 17 11:04:08.921: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172700704s
Dec 17 11:04:10.945: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196710693s
Dec 17 11:04:13.765: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.016325245s
Dec 17 11:04:15.777: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.029223112s
Dec 17 11:04:17.796: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false.
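The emptyDir tests above request a mount with specific permission bits (0777 on the node's default medium, 0666 on tmpfs) and the test container then verifies the mode of the mounted path. A minimal sketch of that mode check, using a plain local directory as a stand-in for the emptyDir mount (the helper name is illustrative, not the suite's):

```python
import os
import stat
import tempfile

def assert_mount_mode(path, want_mode):
    """Verify that path carries exactly the requested permission bits,
    analogous to the 'emptydir 0777/0666' checks in the log above."""
    got = stat.S_IMODE(os.stat(path).st_mode)
    if got != want_mode:
        raise AssertionError(f"mode {oct(got)} != {oct(want_mode)}")
    return got

# Stand-in for an emptyDir mount on the default medium with mode 0777.
mount = tempfile.mkdtemp()
os.chmod(mount, 0o777)  # chmod is not subject to the process umask
assert_mount_mode(mount, 0o777)
```

Using `chmod` rather than creating the directory with a mode argument matters: `mkdir`'s mode is filtered through the umask, while an explicit `chmod` is not, which is the same reason the kubelet sets the mode after creating the emptyDir directory.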
Elapsed: 11.047577829s
STEP: Saw pod success
Dec 17 11:04:17.796: INFO: Pod "pod-f17c3a72-20bc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:04:18.563: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f17c3a72-20bc-11ea-a5ef-0242ac110004 container test-container:
STEP: delete the pod
Dec 17 11:04:19.116: INFO: Waiting for pod pod-f17c3a72-20bc-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:04:19.129: INFO: Pod pod-f17c3a72-20bc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:04:19.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hs72h" for this suite.
Dec 17 11:04:25.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:04:25.458: INFO: namespace: e2e-tests-emptydir-hs72h, resource: bindings, ignored listing per whitelist
Dec 17 11:04:25.472: INFO: namespace e2e-tests-emptydir-hs72h deletion completed in 6.331305581s
• [SLOW TEST:19.132 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:04:25.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-fce5077c-20bc-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 11:04:25.748: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-sdw84" to be "success or failure"
Dec 17 11:04:25.809: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 60.61111ms
Dec 17 11:04:27.823: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074374428s
Dec 17 11:04:29.867: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118620474s
Dec 17 11:04:32.184: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435972305s
Dec 17 11:04:34.440: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.691254597s
Dec 17 11:04:36.466: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false.
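The projected-secret test above mounts a secret with a non-default `defaultMode` and an `fsGroup` set, and the test container prints the mounted file's permissions in `ls -l` style for verification. A small sketch of rendering a mode integer the way that output shows it (illustrative only; the suite's real check also covers file content and group ownership):

```python
import stat

def mode_string(mode):
    """Render permission bits the way 'ls -l' (and the test's output)
    shows them, e.g. 0o440 -> '-r--r-----' for a regular file."""
    return stat.filemode(stat.S_IFREG | mode)
```

For example, a secret volume with `defaultMode: 0440` should appear as `-r--r-----`, readable by the owner and by the `fsGroup` the kubelet applies to the volume.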
Elapsed: 10.717369997s
STEP: Saw pod success
Dec 17 11:04:36.466: INFO: Pod "pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:04:36.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 17 11:04:36.856: INFO: Waiting for pod pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:04:36.924: INFO: Pod pod-projected-secrets-fce7045c-20bc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:04:36.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sdw84" for this suite.
Dec 17 11:04:45.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:04:45.094: INFO: namespace: e2e-tests-projected-sdw84, resource: bindings, ignored listing per whitelist
Dec 17 11:04:45.149: INFO: namespace e2e-tests-projected-sdw84 deletion completed in 8.15276234s
• [SLOW TEST:19.676 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:04:45.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-0899112a-20bd-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 11:04:45.376: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-dtsbn" to be "success or failure"
Dec 17 11:04:45.410: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 33.832065ms
Dec 17 11:04:48.010: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634343557s
Dec 17 11:04:50.202: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.826014813s
Dec 17 11:04:52.220: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.843870874s
Dec 17 11:04:54.255: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879025112s
Dec 17 11:04:56.277: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900808067s
Dec 17 11:04:58.293: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.917355349s
STEP: Saw pod success
Dec 17 11:04:58.293: INFO: Pod "pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:04:58.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 17 11:04:58.377: INFO: Waiting for pod pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:04:58.410: INFO: Pod pod-projected-secrets-089a455e-20bd-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:04:58.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dtsbn" for this suite.
Dec 17 11:05:04.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:05:04.708: INFO: namespace: e2e-tests-projected-dtsbn, resource: bindings, ignored listing per whitelist
Dec 17 11:05:04.817: INFO: namespace e2e-tests-projected-dtsbn deletion completed in 6.395029755s
• [SLOW TEST:19.668 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:05:04.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-ddmr5
I1217 11:05:05.024474       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-ddmr5, replica count: 1
I1217 11:05:06.075581       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:07.075976       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:08.076628       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:09.077248       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:10.077759       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:11.078218       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:12.078759       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:13.079607       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:05:14.080456       8
runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 17 11:05:14.232: INFO: Created: latency-svc-fhgtm
Dec 17 11:05:14.336: INFO: Got endpoints: latency-svc-fhgtm [154.79412ms]
Dec 17 11:05:14.519: INFO: Created: latency-svc-t5hrm
Dec 17 11:05:14.559: INFO: Got endpoints: latency-svc-t5hrm [222.191768ms]
Dec 17 11:05:14.741: INFO: Created: latency-svc-fs96m
Dec 17 11:05:14.800: INFO: Got endpoints: latency-svc-fs96m [463.441081ms]
Dec 17 11:05:15.035: INFO: Created: latency-svc-85vbc
Dec 17 11:05:15.053: INFO: Got endpoints: latency-svc-85vbc [716.31149ms]
Dec 17 11:05:15.125: INFO: Created: latency-svc-78bmc
Dec 17 11:05:15.225: INFO: Got endpoints: latency-svc-78bmc [888.024775ms]
Dec 17 11:05:15.290: INFO: Created: latency-svc-tlqmt
Dec 17 11:05:15.311: INFO: Got endpoints: latency-svc-tlqmt [973.373772ms]
Dec 17 11:05:15.392: INFO: Created: latency-svc-sp8w7
Dec 17 11:05:15.434: INFO: Got endpoints: latency-svc-sp8w7 [1.096374498s]
Dec 17 11:05:15.475: INFO: Created: latency-svc-v5tdf
Dec 17 11:05:15.634: INFO: Got endpoints: latency-svc-v5tdf [1.296582266s]
Dec 17 11:05:15.668: INFO: Created: latency-svc-mzkxx
Dec 17 11:05:15.678: INFO: Got endpoints: latency-svc-mzkxx [1.340565557s]
Dec 17 11:05:15.839: INFO: Created: latency-svc-8v52c
Dec 17 11:05:15.849: INFO: Got endpoints: latency-svc-8v52c [214.752945ms]
Dec 17 11:05:16.135: INFO: Created: latency-svc-brg9j
Dec 17 11:05:16.154: INFO: Got endpoints: latency-svc-brg9j [1.8164888s]
Dec 17 11:05:16.213: INFO: Created: latency-svc-24x9h
Dec 17 11:05:16.331: INFO: Got endpoints: latency-svc-24x9h [1.993506087s]
Dec 17 11:05:16.343: INFO: Created: latency-svc-4qn6v
Dec 17 11:05:16.359: INFO: Got endpoints: latency-svc-4qn6v [2.020935119s]
Dec 17 11:05:16.417: INFO: Created: latency-svc-nzhf2
Dec 17 11:05:16.665: INFO: Got endpoints: latency-svc-nzhf2 [2.327438657s]
Dec 17 11:05:16.695: INFO: Created: latency-svc-57hpz
Dec 17 11:05:16.702: INFO: Got endpoints: latency-svc-57hpz [2.364884735s]
Dec 17 11:05:16.755: INFO: Created: latency-svc-57pz9
Dec 17 11:05:16.863: INFO: Got endpoints: latency-svc-57pz9 [2.525577959s]
Dec 17 11:05:16.952: INFO: Created: latency-svc-bxzcw
Dec 17 11:05:17.124: INFO: Got endpoints: latency-svc-bxzcw [2.78706994s]
Dec 17 11:05:17.147: INFO: Created: latency-svc-w2sl5
Dec 17 11:05:17.185: INFO: Got endpoints: latency-svc-w2sl5 [2.626341364s]
Dec 17 11:05:17.539: INFO: Created: latency-svc-tg84v
Dec 17 11:05:17.589: INFO: Got endpoints: latency-svc-tg84v [2.788647649s]
Dec 17 11:05:17.779: INFO: Created: latency-svc-4l2xm
Dec 17 11:05:17.806: INFO: Got endpoints: latency-svc-4l2xm [2.752631161s]
Dec 17 11:05:18.126: INFO: Created: latency-svc-m9hbn
Dec 17 11:05:18.157: INFO: Got endpoints: latency-svc-m9hbn [2.931535665s]
Dec 17 11:05:18.474: INFO: Created: latency-svc-76wc2
Dec 17 11:05:18.529: INFO: Got endpoints: latency-svc-76wc2 [3.21770994s]
Dec 17 11:05:18.789: INFO: Created: latency-svc-dbrps
Dec 17 11:05:19.016: INFO: Got endpoints: latency-svc-dbrps [3.582381504s]
Dec 17 11:05:19.072: INFO: Created: latency-svc-xhxmh
Dec 17 11:05:19.214: INFO: Created: latency-svc-ncl9w
Dec 17 11:05:19.244: INFO: Got endpoints: latency-svc-xhxmh [3.565720012s]
Dec 17 11:05:19.265: INFO: Got endpoints: latency-svc-ncl9w [3.415573359s]
Dec 17 11:05:19.391: INFO: Created: latency-svc-jlzm8
Dec 17 11:05:19.404: INFO: Got endpoints: latency-svc-jlzm8 [3.250119373s]
Dec 17 11:05:19.472: INFO: Created: latency-svc-tt5pf
Dec 17 11:05:19.567: INFO: Got endpoints: latency-svc-tt5pf [3.236223901s]
Dec 17 11:05:19.619: INFO: Created: latency-svc-dmgtm
Dec 17 11:05:19.635: INFO: Got endpoints: latency-svc-dmgtm [3.275948854s]
Dec 17 11:05:19.761: INFO: Created: latency-svc-jj89h
Dec 17 11:05:19.849: INFO: Got endpoints: latency-svc-jj89h [3.183618573s]
Dec 17 11:05:20.086: INFO: Created: latency-svc-vt6bz
Dec 17 11:05:20.103: INFO: Got endpoints: latency-svc-vt6bz [3.400845348s]
Dec 17 11:05:20.286: INFO: Created: latency-svc-q565t
Dec 17 11:05:20.316: INFO: Got endpoints: latency-svc-q565t [3.453148173s]
Dec 17 11:05:20.357: INFO: Created: latency-svc-nplz7
Dec 17 11:05:20.468: INFO: Got endpoints: latency-svc-nplz7 [3.343437506s]
Dec 17 11:05:20.502: INFO: Created: latency-svc-f2vrc
Dec 17 11:05:20.524: INFO: Got endpoints: latency-svc-f2vrc [3.338660384s]
Dec 17 11:05:20.740: INFO: Created: latency-svc-nrqlq
Dec 17 11:05:20.774: INFO: Got endpoints: latency-svc-nrqlq [3.184990433s]
Dec 17 11:05:21.063: INFO: Created: latency-svc-njz9g
Dec 17 11:05:21.275: INFO: Got endpoints: latency-svc-njz9g [3.469249946s]
Dec 17 11:05:21.294: INFO: Created: latency-svc-mbrtx
Dec 17 11:05:21.301: INFO: Got endpoints: latency-svc-mbrtx [3.143083056s]
Dec 17 11:05:21.474: INFO: Created: latency-svc-lp64j
Dec 17 11:05:21.503: INFO: Got endpoints: latency-svc-lp64j [2.973402594s]
Dec 17 11:05:21.561: INFO: Created: latency-svc-85hwf
Dec 17 11:05:21.568: INFO: Got endpoints: latency-svc-85hwf [2.551818803s]
Dec 17 11:05:21.687: INFO: Created: latency-svc-8qf6b
Dec 17 11:05:21.698: INFO: Got endpoints: latency-svc-8qf6b [2.453980396s]
Dec 17 11:05:21.847: INFO: Created: latency-svc-sgbwh
Dec 17 11:05:21.879: INFO: Got endpoints: latency-svc-sgbwh [2.614235929s]
Dec 17 11:05:22.045: INFO: Created: latency-svc-2m6wp
Dec 17 11:05:22.080: INFO: Got endpoints: latency-svc-2m6wp [2.675060481s]
Dec 17 11:05:22.295: INFO: Created: latency-svc-79lt6
Dec 17 11:05:22.302: INFO: Got endpoints: latency-svc-79lt6 [2.734209267s]
Dec 17 11:05:22.782: INFO: Created: latency-svc-j9v7n
Dec 17 11:05:22.814: INFO: Got endpoints: latency-svc-j9v7n [3.179447239s]
Dec 17 11:05:23.297: INFO: Created: latency-svc-c47wt
Dec 17 11:05:23.312: INFO: Got endpoints: latency-svc-c47wt [3.462254014s]
Dec 17 11:05:23.527: INFO: Created: latency-svc-2k6z6
Dec 17 11:05:23.607: INFO: Got endpoints: latency-svc-2k6z6 [3.504107252s]
Dec 17 11:05:24.367: INFO: Created: latency-svc-x2wdk
Dec 17 11:05:24.532: INFO: Created: latency-svc-f7vbw
Dec 17 11:05:24.691: INFO: Got endpoints: latency-svc-x2wdk [4.374261579s]
Dec 17 11:05:24.716: INFO: Got endpoints: latency-svc-f7vbw [4.247366971s]
Dec 17 11:05:24.728: INFO: Created: latency-svc-bl659
Dec 17 11:05:24.728: INFO: Got endpoints: latency-svc-bl659 [4.203526156s]
Dec 17 11:05:24.768: INFO: Created: latency-svc-bmlpf
Dec 17 11:05:24.865: INFO: Got endpoints: latency-svc-bmlpf [4.090172117s]
Dec 17 11:05:24.950: INFO: Created: latency-svc-nx7xt
Dec 17 11:05:25.085: INFO: Got endpoints: latency-svc-nx7xt [3.809565909s]
Dec 17 11:05:25.133: INFO: Created: latency-svc-95p9r
Dec 17 11:05:25.146: INFO: Got endpoints: latency-svc-95p9r [3.844932465s]
Dec 17 11:05:25.249: INFO: Created: latency-svc-t57nn
Dec 17 11:05:25.324: INFO: Got endpoints: latency-svc-t57nn [3.820699949s]
Dec 17 11:05:25.491: INFO: Created: latency-svc-ptmm5
Dec 17 11:05:25.553: INFO: Got endpoints: latency-svc-ptmm5 [3.983975841s]
Dec 17 11:05:25.748: INFO: Created: latency-svc-5tqjp
Dec 17 11:05:25.772: INFO: Got endpoints: latency-svc-5tqjp [4.073095702s]
Dec 17 11:05:25.843: INFO: Created: latency-svc-cb4jk
Dec 17 11:05:26.032: INFO: Got endpoints: latency-svc-cb4jk [4.152803234s]
Dec 17 11:05:26.127: INFO: Created: latency-svc-bsffx
Dec 17 11:05:26.135: INFO: Got endpoints: latency-svc-bsffx [4.055125853s]
Dec 17 11:05:26.317: INFO: Created: latency-svc-xvx7r
Dec 17 11:05:26.346: INFO: Got endpoints: latency-svc-xvx7r [4.04377383s]
Dec 17 11:05:27.301: INFO: Created: latency-svc-pwkb2
Dec 17 11:05:27.332: INFO: Got endpoints: latency-svc-pwkb2 [4.517709311s]
Dec 17 11:05:27.612: INFO: Created: latency-svc-zvfht
Dec 17 11:05:27.650: INFO: Got endpoints: latency-svc-zvfht [4.337831461s]
Dec 17 11:05:28.181: INFO: Created: latency-svc-9lclf
Dec 17 11:05:28.190: INFO: Got endpoints: latency-svc-9lclf [4.582165884s]
Dec 17 11:05:29.243: INFO: Created: latency-svc-tqmh6
Dec 17 11:05:29.279: INFO: Got
endpoints: latency-svc-tqmh6 [4.588153825s]
Dec 17 11:05:29.439: INFO: Created: latency-svc-7qcmx
Dec 17 11:05:29.465: INFO: Got endpoints: latency-svc-7qcmx [4.749306842s]
Dec 17 11:05:29.762: INFO: Created: latency-svc-s6pvd
Dec 17 11:05:29.799: INFO: Got endpoints: latency-svc-s6pvd [5.071643972s]
Dec 17 11:05:29.882: INFO: Created: latency-svc-zddvj
Dec 17 11:05:29.986: INFO: Got endpoints: latency-svc-zddvj [5.120795078s]
Dec 17 11:05:30.008: INFO: Created: latency-svc-nwtms
Dec 17 11:05:30.086: INFO: Got endpoints: latency-svc-nwtms [5.000712215s]
Dec 17 11:05:30.165: INFO: Created: latency-svc-rhkzz
Dec 17 11:05:30.183: INFO: Got endpoints: latency-svc-rhkzz [5.036734528s]
Dec 17 11:05:30.272: INFO: Created: latency-svc-9h7w4
Dec 17 11:05:30.388: INFO: Got endpoints: latency-svc-9h7w4 [5.063938304s]
Dec 17 11:05:30.404: INFO: Created: latency-svc-kl6q6
Dec 17 11:05:30.429: INFO: Got endpoints: latency-svc-kl6q6 [4.875821386s]
Dec 17 11:05:30.605: INFO: Created: latency-svc-p82bn
Dec 17 11:05:30.617: INFO: Got endpoints: latency-svc-p82bn [4.84561127s]
Dec 17 11:05:30.662: INFO: Created: latency-svc-ksvl4
Dec 17 11:05:30.670: INFO: Got endpoints: latency-svc-ksvl4 [4.637441746s]
Dec 17 11:05:30.784: INFO: Created: latency-svc-lkpbl
Dec 17 11:05:30.805: INFO: Got endpoints: latency-svc-lkpbl [4.669781489s]
Dec 17 11:05:30.983: INFO: Created: latency-svc-c5rnk
Dec 17 11:05:30.985: INFO: Got endpoints: latency-svc-c5rnk [4.63944427s]
Dec 17 11:05:31.019: INFO: Created: latency-svc-c9xs8
Dec 17 11:05:31.031: INFO: Got endpoints: latency-svc-c9xs8 [3.699011747s]
Dec 17 11:05:31.222: INFO: Created: latency-svc-bvwdq
Dec 17 11:05:31.251: INFO: Got endpoints: latency-svc-bvwdq [3.601238809s]
Dec 17 11:05:31.403: INFO: Created: latency-svc-hp9ck
Dec 17 11:05:31.403: INFO: Got endpoints: latency-svc-hp9ck [3.212523254s]
Dec 17 11:05:31.474: INFO: Created: latency-svc-lrrgm
Dec 17 11:05:31.572: INFO: Got endpoints: latency-svc-lrrgm [2.293024895s]
Dec 17 11:05:31.601: INFO: Created: latency-svc-dxvzf
Dec 17 11:05:31.618: INFO: Got endpoints: latency-svc-dxvzf [2.152118277s]
Dec 17 11:05:31.656: INFO: Created: latency-svc-d6jcs
Dec 17 11:05:31.770: INFO: Got endpoints: latency-svc-d6jcs [1.96990087s]
Dec 17 11:05:31.819: INFO: Created: latency-svc-lzphz
Dec 17 11:05:31.852: INFO: Got endpoints: latency-svc-lzphz [1.865414248s]
Dec 17 11:05:32.008: INFO: Created: latency-svc-xmw8x
Dec 17 11:05:32.034: INFO: Got endpoints: latency-svc-xmw8x [1.948192613s]
Dec 17 11:05:32.095: INFO: Created: latency-svc-4ls7l
Dec 17 11:05:32.220: INFO: Got endpoints: latency-svc-4ls7l [2.036937767s]
Dec 17 11:05:32.303: INFO: Created: latency-svc-5rxff
Dec 17 11:05:32.429: INFO: Got endpoints: latency-svc-5rxff [2.041299324s]
Dec 17 11:05:32.461: INFO: Created: latency-svc-ndzj9
Dec 17 11:05:32.496: INFO: Got endpoints: latency-svc-ndzj9 [2.067496354s]
Dec 17 11:05:32.734: INFO: Created: latency-svc-g2hjg
Dec 17 11:05:32.889: INFO: Got endpoints: latency-svc-g2hjg [2.271882292s]
Dec 17 11:05:32.946: INFO: Created: latency-svc-5vszp
Dec 17 11:05:32.974: INFO: Got endpoints: latency-svc-5vszp [2.303677041s]
Dec 17 11:05:33.134: INFO: Created: latency-svc-rshcv
Dec 17 11:05:33.176: INFO: Got endpoints: latency-svc-rshcv [2.37063029s]
Dec 17 11:05:33.334: INFO: Created: latency-svc-d9wjq
Dec 17 11:05:33.357: INFO: Got endpoints: latency-svc-d9wjq [2.371063343s]
Dec 17 11:05:33.418: INFO: Created: latency-svc-44mdv
Dec 17 11:05:33.487: INFO: Got endpoints: latency-svc-44mdv [2.454916439s]
Dec 17 11:05:33.502: INFO: Created: latency-svc-xnrmf
Dec 17 11:05:33.528: INFO: Got endpoints: latency-svc-xnrmf [2.276591718s]
Dec 17 11:05:33.580: INFO: Created: latency-svc-2rpl6
Dec 17 11:05:33.694: INFO: Got endpoints: latency-svc-2rpl6 [2.291648213s]
Dec 17 11:05:33.732: INFO: Created: latency-svc-rj2fk
Dec 17 11:05:33.738: INFO: Got endpoints: latency-svc-rj2fk [2.165305744s]
Dec 17 11:05:33.919: INFO: Created: latency-svc-65269
Dec 17 11:05:33.936: INFO: Got endpoints: latency-svc-65269 [2.31844148s]
Dec 17 11:05:34.177: INFO: Created: latency-svc-gpzqx
Dec 17 11:05:34.322: INFO: Got endpoints: latency-svc-gpzqx [2.551835726s]
Dec 17 11:05:34.340: INFO: Created: latency-svc-pttk2
Dec 17 11:05:34.365: INFO: Got endpoints: latency-svc-pttk2 [2.51298061s]
Dec 17 11:05:34.417: INFO: Created: latency-svc-cb5dd
Dec 17 11:05:34.583: INFO: Got endpoints: latency-svc-cb5dd [2.548852433s]
Dec 17 11:05:34.625: INFO: Created: latency-svc-xbgtl
Dec 17 11:05:34.790: INFO: Got endpoints: latency-svc-xbgtl [2.570241635s]
Dec 17 11:05:34.846: INFO: Created: latency-svc-2qcxl
Dec 17 11:05:35.009: INFO: Created: latency-svc-7d9kb
Dec 17 11:05:35.018: INFO: Got endpoints: latency-svc-2qcxl [2.588350046s]
Dec 17 11:05:35.051: INFO: Got endpoints: latency-svc-7d9kb [2.554617102s]
Dec 17 11:05:35.159: INFO: Created: latency-svc-xz8sp
Dec 17 11:05:35.167: INFO: Got endpoints: latency-svc-xz8sp [2.276353575s]
Dec 17 11:05:35.245: INFO: Created: latency-svc-d5fnn
Dec 17 11:05:35.355: INFO: Got endpoints: latency-svc-d5fnn [2.380649496s]
Dec 17 11:05:35.386: INFO: Created: latency-svc-nmqj8
Dec 17 11:05:35.387: INFO: Got endpoints: latency-svc-nmqj8 [2.211033554s]
Dec 17 11:05:35.455: INFO: Created: latency-svc-874vt
Dec 17 11:05:35.509: INFO: Got endpoints: latency-svc-874vt [2.152160836s]
Dec 17 11:05:35.582: INFO: Created: latency-svc-4fdpp
Dec 17 11:05:35.594: INFO: Got endpoints: latency-svc-4fdpp [2.107301755s]
Dec 17 11:05:35.705: INFO: Created: latency-svc-nxgkd
Dec 17 11:05:35.729: INFO: Got endpoints: latency-svc-nxgkd [2.201248279s]
Dec 17 11:05:35.759: INFO: Created: latency-svc-fgtdj
Dec 17 11:05:35.958: INFO: Got endpoints: latency-svc-fgtdj [2.263477506s]
Dec 17 11:05:35.989: INFO: Created: latency-svc-ch2lz
Dec 17 11:05:35.994: INFO: Got endpoints: latency-svc-ch2lz [2.255688963s]
Dec 17 11:05:36.057: INFO: Created: latency-svc-rkf4h
Dec 17 11:05:36.198: INFO: Got endpoints: latency-svc-rkf4h [2.261670467s]
Dec 17 11:05:36.227: INFO: Created: latency-svc-mjdp5
Dec 17 11:05:36.274: INFO: Got endpoints: latency-svc-mjdp5 [1.951331553s]
Dec 17 11:05:36.279: INFO: Created: latency-svc-h8tx8
Dec 17 11:05:36.376: INFO: Got endpoints: latency-svc-h8tx8 [2.010530108s]
Dec 17 11:05:36.391: INFO: Created: latency-svc-v7bv6
Dec 17 11:05:36.417: INFO: Got endpoints: latency-svc-v7bv6 [1.833437486s]
Dec 17 11:05:36.566: INFO: Created: latency-svc-s4pvk
Dec 17 11:05:36.602: INFO: Got endpoints: latency-svc-s4pvk [1.811670957s]
Dec 17 11:05:36.657: INFO: Created: latency-svc-p5bfq
Dec 17 11:05:36.732: INFO: Got endpoints: latency-svc-p5bfq [1.714002616s]
Dec 17 11:05:36.770: INFO: Created: latency-svc-5fltt
Dec 17 11:05:36.807: INFO: Got endpoints: latency-svc-5fltt [1.7556224s]
Dec 17 11:05:37.048: INFO: Created: latency-svc-fgl6f
Dec 17 11:05:37.220: INFO: Got endpoints: latency-svc-fgl6f [2.053314717s]
Dec 17 11:05:37.232: INFO: Created: latency-svc-h82xt
Dec 17 11:05:37.243: INFO: Got endpoints: latency-svc-h82xt [1.887832794s]
Dec 17 11:05:37.305: INFO: Created: latency-svc-5n9d6
Dec 17 11:05:37.456: INFO: Got endpoints: latency-svc-5n9d6 [2.069063162s]
Dec 17 11:05:37.470: INFO: Created: latency-svc-l6fzn
Dec 17 11:05:37.485: INFO: Got endpoints: latency-svc-l6fzn [1.976336349s]
Dec 17 11:05:37.545: INFO: Created: latency-svc-smk8d
Dec 17 11:05:37.625: INFO: Got endpoints: latency-svc-smk8d [2.030499117s]
Dec 17 11:05:37.650: INFO: Created: latency-svc-lk8f8
Dec 17 11:05:37.701: INFO: Got endpoints: latency-svc-lk8f8 [1.971775981s]
Dec 17 11:05:37.707: INFO: Created: latency-svc-87577
Dec 17 11:05:37.879: INFO: Got endpoints: latency-svc-87577 [1.920724705s]
Dec 17 11:05:37.907: INFO: Created: latency-svc-d8j9h
Dec 17 11:05:37.956: INFO: Got endpoints: latency-svc-d8j9h [1.961608397s]
Dec 17 11:05:38.582: INFO: Created: latency-svc-5s8pn
Dec 17 11:05:38.601: INFO: Got endpoints: latency-svc-5s8pn [2.402482299s]
Dec 17 11:05:38.766: INFO: Created: latency-svc-cfvvf
Dec 17 11:05:38.796: INFO: Got endpoints: latency-svc-cfvvf [2.522059261s]
Dec 17 11:05:38.996: INFO: Created: latency-svc-bkcrr
Dec 17 11:05:39.035: INFO: Got endpoints: latency-svc-bkcrr [2.658813254s]
Dec 17 11:05:39.339: INFO: Created: latency-svc-2zpnd
Dec 17 11:05:39.351: INFO: Got endpoints: latency-svc-2zpnd [2.933254452s]
Dec 17 11:05:39.512: INFO: Created: latency-svc-rgm6b
Dec 17 11:05:39.557: INFO: Got endpoints: latency-svc-rgm6b [2.954471211s]
Dec 17 11:05:39.599: INFO: Created: latency-svc-hvpqp
Dec 17 11:05:39.683: INFO: Got endpoints: latency-svc-hvpqp [2.950149839s]
Dec 17 11:05:39.762: INFO: Created: latency-svc-lfxf8
Dec 17 11:05:39.917: INFO: Got endpoints: latency-svc-lfxf8 [3.109455504s]
Dec 17 11:05:39.957: INFO: Created: latency-svc-5hlw2
Dec 17 11:05:39.977: INFO: Got endpoints: latency-svc-5hlw2 [2.757101406s]
Dec 17 11:05:40.146: INFO: Created: latency-svc-d2f7z
Dec 17 11:05:40.178: INFO: Got endpoints: latency-svc-d2f7z [2.93481039s]
Dec 17 11:05:40.348: INFO: Created: latency-svc-wlb5p
Dec 17 11:05:40.377: INFO: Got endpoints: latency-svc-wlb5p [2.920525536s]
Dec 17 11:05:40.444: INFO: Created: latency-svc-8cvvh
Dec 17 11:05:40.574: INFO: Got endpoints: latency-svc-8cvvh [3.088907601s]
Dec 17 11:05:40.597: INFO: Created: latency-svc-hxqnv
Dec 17 11:05:40.624: INFO: Got endpoints: latency-svc-hxqnv [2.999469287s]
Dec 17 11:05:40.788: INFO: Created: latency-svc-chzgm
Dec 17 11:05:40.822: INFO: Got endpoints: latency-svc-chzgm [3.12030776s]
Dec 17 11:05:40.882: INFO: Created: latency-svc-8rz86
Dec 17 11:05:40.991: INFO: Got endpoints: latency-svc-8rz86 [3.110745897s]
Dec 17 11:05:41.222: INFO: Created: latency-svc-jpn65
Dec 17 11:05:41.261: INFO: Got endpoints: latency-svc-jpn65 [3.304477017s]
Dec 17 11:05:41.429: INFO: Created: latency-svc-bdxrf
Dec 17 11:05:41.526: INFO: Got endpoints: latency-svc-bdxrf [2.925380226s]
Dec 17 11:05:42.417: INFO: Created: latency-svc-67flb
Dec 17 11:05:42.438: INFO: Got endpoints: latency-svc-67flb [3.641283441s]
Dec 17 11:05:43.100:
INFO: Created: latency-svc-ppj2f Dec 17 11:05:43.108: INFO: Got endpoints: latency-svc-ppj2f [4.073218303s] Dec 17 11:05:43.261: INFO: Created: latency-svc-7rl9j Dec 17 11:05:43.287: INFO: Got endpoints: latency-svc-7rl9j [3.936784159s] Dec 17 11:05:43.488: INFO: Created: latency-svc-xc6hf Dec 17 11:05:43.500: INFO: Got endpoints: latency-svc-xc6hf [3.94221915s] Dec 17 11:05:43.647: INFO: Created: latency-svc-gzc99 Dec 17 11:05:43.724: INFO: Got endpoints: latency-svc-gzc99 [4.040789159s] Dec 17 11:05:44.007: INFO: Created: latency-svc-gwpzg Dec 17 11:05:44.029: INFO: Got endpoints: latency-svc-gwpzg [4.111695107s] Dec 17 11:05:44.157: INFO: Created: latency-svc-87x4w Dec 17 11:05:44.190: INFO: Got endpoints: latency-svc-87x4w [4.212417671s] Dec 17 11:05:44.346: INFO: Created: latency-svc-8txd5 Dec 17 11:05:44.375: INFO: Got endpoints: latency-svc-8txd5 [4.197056951s] Dec 17 11:05:44.570: INFO: Created: latency-svc-7vzjk Dec 17 11:05:44.718: INFO: Got endpoints: latency-svc-7vzjk [4.341001796s] Dec 17 11:05:44.736: INFO: Created: latency-svc-z5sph Dec 17 11:05:44.785: INFO: Got endpoints: latency-svc-z5sph [4.210453982s] Dec 17 11:05:44.932: INFO: Created: latency-svc-7rjgv Dec 17 11:05:44.954: INFO: Got endpoints: latency-svc-7rjgv [4.329531797s] Dec 17 11:05:45.138: INFO: Created: latency-svc-xbfbl Dec 17 11:05:45.167: INFO: Got endpoints: latency-svc-xbfbl [4.344430902s] Dec 17 11:05:45.300: INFO: Created: latency-svc-jnxhd Dec 17 11:05:45.315: INFO: Got endpoints: latency-svc-jnxhd [4.324422525s] Dec 17 11:05:45.451: INFO: Created: latency-svc-xcw52 Dec 17 11:05:45.459: INFO: Got endpoints: latency-svc-xcw52 [4.198306548s] Dec 17 11:05:45.619: INFO: Created: latency-svc-6ckmb Dec 17 11:05:45.637: INFO: Got endpoints: latency-svc-6ckmb [4.110767653s] Dec 17 11:05:45.773: INFO: Created: latency-svc-bj2hc Dec 17 11:05:45.821: INFO: Got endpoints: latency-svc-bj2hc [3.382694037s] Dec 17 11:05:46.011: INFO: Created: latency-svc-l2d9l Dec 17 11:05:46.039: INFO: Got 
endpoints: latency-svc-l2d9l [2.930780149s] Dec 17 11:05:46.192: INFO: Created: latency-svc-8x5mr Dec 17 11:05:46.227: INFO: Got endpoints: latency-svc-8x5mr [2.939415384s] Dec 17 11:05:46.390: INFO: Created: latency-svc-szdzg Dec 17 11:05:46.411: INFO: Got endpoints: latency-svc-szdzg [2.910362504s] Dec 17 11:05:46.579: INFO: Created: latency-svc-kt57q Dec 17 11:05:46.585: INFO: Got endpoints: latency-svc-kt57q [2.861241589s] Dec 17 11:05:46.625: INFO: Created: latency-svc-hthm8 Dec 17 11:05:46.724: INFO: Got endpoints: latency-svc-hthm8 [2.694948925s] Dec 17 11:05:46.777: INFO: Created: latency-svc-ssxsk Dec 17 11:05:46.796: INFO: Got endpoints: latency-svc-ssxsk [2.605785597s] Dec 17 11:05:46.964: INFO: Created: latency-svc-727kz Dec 17 11:05:47.062: INFO: Got endpoints: latency-svc-727kz [2.686418301s] Dec 17 11:05:47.152: INFO: Created: latency-svc-rng2f Dec 17 11:05:47.357: INFO: Got endpoints: latency-svc-rng2f [2.63828175s] Dec 17 11:05:47.376: INFO: Created: latency-svc-r5lg5 Dec 17 11:05:47.388: INFO: Got endpoints: latency-svc-r5lg5 [2.602595735s] Dec 17 11:05:47.603: INFO: Created: latency-svc-w5kgw Dec 17 11:05:47.611: INFO: Got endpoints: latency-svc-w5kgw [2.656441229s] Dec 17 11:05:47.645: INFO: Created: latency-svc-g4s2k Dec 17 11:05:47.892: INFO: Got endpoints: latency-svc-g4s2k [2.725453394s] Dec 17 11:05:48.262: INFO: Created: latency-svc-v2s2z Dec 17 11:05:48.300: INFO: Got endpoints: latency-svc-v2s2z [2.984250665s] Dec 17 11:05:48.607: INFO: Created: latency-svc-lpfwv Dec 17 11:05:48.794: INFO: Got endpoints: latency-svc-lpfwv [3.335102943s] Dec 17 11:05:48.820: INFO: Created: latency-svc-g7kft Dec 17 11:05:49.061: INFO: Got endpoints: latency-svc-g7kft [3.423026114s] Dec 17 11:05:49.080: INFO: Created: latency-svc-4p7ll Dec 17 11:05:49.091: INFO: Got endpoints: latency-svc-4p7ll [3.269640708s] Dec 17 11:05:49.154: INFO: Created: latency-svc-wv692 Dec 17 11:05:49.343: INFO: Got endpoints: latency-svc-wv692 [3.303323546s] Dec 17 11:05:49.400: 
INFO: Created: latency-svc-8tq29 Dec 17 11:05:49.425: INFO: Got endpoints: latency-svc-8tq29 [3.198038117s] Dec 17 11:05:49.586: INFO: Created: latency-svc-7sbnc Dec 17 11:05:49.611: INFO: Got endpoints: latency-svc-7sbnc [3.200406928s] Dec 17 11:05:49.661: INFO: Created: latency-svc-mpljm Dec 17 11:05:49.737: INFO: Got endpoints: latency-svc-mpljm [3.151933624s] Dec 17 11:05:49.758: INFO: Created: latency-svc-k2mft Dec 17 11:05:49.777: INFO: Got endpoints: latency-svc-k2mft [3.052194993s] Dec 17 11:05:49.838: INFO: Created: latency-svc-n5v62 Dec 17 11:05:50.004: INFO: Got endpoints: latency-svc-n5v62 [3.20765972s] Dec 17 11:05:50.051: INFO: Created: latency-svc-s5hgt Dec 17 11:05:50.143: INFO: Created: latency-svc-gl5f6 Dec 17 11:05:50.150: INFO: Got endpoints: latency-svc-s5hgt [3.087395993s] Dec 17 11:05:50.190: INFO: Got endpoints: latency-svc-gl5f6 [2.833580557s] Dec 17 11:05:50.222: INFO: Created: latency-svc-ws2dl Dec 17 11:05:50.242: INFO: Got endpoints: latency-svc-ws2dl [2.854356965s] Dec 17 11:05:50.412: INFO: Created: latency-svc-qgzp9 Dec 17 11:05:50.436: INFO: Got endpoints: latency-svc-qgzp9 [2.824800395s] Dec 17 11:05:50.645: INFO: Created: latency-svc-qq762 Dec 17 11:05:50.714: INFO: Got endpoints: latency-svc-qq762 [2.821546973s] Dec 17 11:05:50.730: INFO: Created: latency-svc-jkzsg Dec 17 11:05:50.855: INFO: Got endpoints: latency-svc-jkzsg [2.55566502s] Dec 17 11:05:50.969: INFO: Created: latency-svc-qm9jj Dec 17 11:05:51.079: INFO: Got endpoints: latency-svc-qm9jj [2.283634383s] Dec 17 11:05:51.113: INFO: Created: latency-svc-brk65 Dec 17 11:05:51.129: INFO: Got endpoints: latency-svc-brk65 [2.068566199s] Dec 17 11:05:51.275: INFO: Created: latency-svc-h8t9r Dec 17 11:05:51.278: INFO: Got endpoints: latency-svc-h8t9r [2.187744493s] Dec 17 11:05:51.359: INFO: Created: latency-svc-bngz6 Dec 17 11:05:51.501: INFO: Got endpoints: latency-svc-bngz6 [2.158142016s] Dec 17 11:05:51.550: INFO: Created: latency-svc-jmkvp Dec 17 11:05:51.551: INFO: Got 
endpoints: latency-svc-jmkvp [2.125121975s] Dec 17 11:05:51.584: INFO: Created: latency-svc-hhjbj Dec 17 11:05:51.698: INFO: Got endpoints: latency-svc-hhjbj [2.087003141s] Dec 17 11:05:51.747: INFO: Created: latency-svc-9gdjb Dec 17 11:05:51.749: INFO: Got endpoints: latency-svc-9gdjb [2.01112452s] Dec 17 11:05:51.941: INFO: Created: latency-svc-bxcbc Dec 17 11:05:51.946: INFO: Got endpoints: latency-svc-bxcbc [2.169101422s] Dec 17 11:05:52.010: INFO: Created: latency-svc-6rnbc Dec 17 11:05:52.147: INFO: Got endpoints: latency-svc-6rnbc [2.142699203s] Dec 17 11:05:52.166: INFO: Created: latency-svc-vn9xv Dec 17 11:05:52.166: INFO: Got endpoints: latency-svc-vn9xv [2.016273169s] Dec 17 11:05:52.232: INFO: Created: latency-svc-wjf28 Dec 17 11:05:52.355: INFO: Got endpoints: latency-svc-wjf28 [2.163962824s] Dec 17 11:05:52.399: INFO: Created: latency-svc-6lhg8 Dec 17 11:05:52.401: INFO: Got endpoints: latency-svc-6lhg8 [2.157790352s] Dec 17 11:05:52.449: INFO: Created: latency-svc-vf42z Dec 17 11:05:52.543: INFO: Got endpoints: latency-svc-vf42z [2.107289796s] Dec 17 11:05:52.587: INFO: Created: latency-svc-44dbr Dec 17 11:05:52.611: INFO: Got endpoints: latency-svc-44dbr [1.896822982s] Dec 17 11:05:52.805: INFO: Created: latency-svc-t9rtt Dec 17 11:05:52.830: INFO: Got endpoints: latency-svc-t9rtt [1.974488293s] Dec 17 11:05:52.983: INFO: Created: latency-svc-9grhx Dec 17 11:05:53.003: INFO: Got endpoints: latency-svc-9grhx [1.924065702s] Dec 17 11:05:53.061: INFO: Created: latency-svc-xbjn9 Dec 17 11:05:53.144: INFO: Got endpoints: latency-svc-xbjn9 [2.014543389s] Dec 17 11:05:53.169: INFO: Created: latency-svc-9fbx6 Dec 17 11:05:53.176: INFO: Got endpoints: latency-svc-9fbx6 [1.897803911s] Dec 17 11:05:53.210: INFO: Created: latency-svc-j2qcr Dec 17 11:05:53.221: INFO: Got endpoints: latency-svc-j2qcr [1.71990906s] Dec 17 11:05:53.337: INFO: Created: latency-svc-x7lvn Dec 17 11:05:53.345: INFO: Got endpoints: latency-svc-x7lvn [1.794280562s] Dec 17 11:05:53.404: 
INFO: Created: latency-svc-qfwft Dec 17 11:05:53.413: INFO: Got endpoints: latency-svc-qfwft [1.714474631s] Dec 17 11:05:53.413: INFO: Latencies: [214.752945ms 222.191768ms 463.441081ms 716.31149ms 888.024775ms 973.373772ms 1.096374498s 1.296582266s 1.340565557s 1.714002616s 1.714474631s 1.71990906s 1.7556224s 1.794280562s 1.811670957s 1.8164888s 1.833437486s 1.865414248s 1.887832794s 1.896822982s 1.897803911s 1.920724705s 1.924065702s 1.948192613s 1.951331553s 1.961608397s 1.96990087s 1.971775981s 1.974488293s 1.976336349s 1.993506087s 2.010530108s 2.01112452s 2.014543389s 2.016273169s 2.020935119s 2.030499117s 2.036937767s 2.041299324s 2.053314717s 2.067496354s 2.068566199s 2.069063162s 2.087003141s 2.107289796s 2.107301755s 2.125121975s 2.142699203s 2.152118277s 2.152160836s 2.157790352s 2.158142016s 2.163962824s 2.165305744s 2.169101422s 2.187744493s 2.201248279s 2.211033554s 2.255688963s 2.261670467s 2.263477506s 2.271882292s 2.276353575s 2.276591718s 2.283634383s 2.291648213s 2.293024895s 2.303677041s 2.31844148s 2.327438657s 2.364884735s 2.37063029s 2.371063343s 2.380649496s 2.402482299s 2.453980396s 2.454916439s 2.51298061s 2.522059261s 2.525577959s 2.548852433s 2.551818803s 2.551835726s 2.554617102s 2.55566502s 2.570241635s 2.588350046s 2.602595735s 2.605785597s 2.614235929s 2.626341364s 2.63828175s 2.656441229s 2.658813254s 2.675060481s 2.686418301s 2.694948925s 2.725453394s 2.734209267s 2.752631161s 2.757101406s 2.78706994s 2.788647649s 2.821546973s 2.824800395s 2.833580557s 2.854356965s 2.861241589s 2.910362504s 2.920525536s 2.925380226s 2.930780149s 2.931535665s 2.933254452s 2.93481039s 2.939415384s 2.950149839s 2.954471211s 2.973402594s 2.984250665s 2.999469287s 3.052194993s 3.087395993s 3.088907601s 3.109455504s 3.110745897s 3.12030776s 3.143083056s 3.151933624s 3.179447239s 3.183618573s 3.184990433s 3.198038117s 3.200406928s 3.20765972s 3.212523254s 3.21770994s 3.236223901s 3.250119373s 3.269640708s 3.275948854s 3.303323546s 3.304477017s 
3.335102943s 3.338660384s 3.343437506s 3.382694037s 3.400845348s 3.415573359s 3.423026114s 3.453148173s 3.462254014s 3.469249946s 3.504107252s 3.565720012s 3.582381504s 3.601238809s 3.641283441s 3.699011747s 3.809565909s 3.820699949s 3.844932465s 3.936784159s 3.94221915s 3.983975841s 4.040789159s 4.04377383s 4.055125853s 4.073095702s 4.073218303s 4.090172117s 4.110767653s 4.111695107s 4.152803234s 4.197056951s 4.198306548s 4.203526156s 4.210453982s 4.212417671s 4.247366971s 4.324422525s 4.329531797s 4.337831461s 4.341001796s 4.344430902s 4.374261579s 4.517709311s 4.582165884s 4.588153825s 4.637441746s 4.63944427s 4.669781489s 4.749306842s 4.84561127s 4.875821386s 5.000712215s 5.036734528s 5.063938304s 5.071643972s 5.120795078s] Dec 17 11:05:53.413: INFO: 50 %ile: 2.757101406s Dec 17 11:05:53.413: INFO: 90 %ile: 4.324422525s Dec 17 11:05:53.413: INFO: 99 %ile: 5.071643972s Dec 17 11:05:53.413: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:05:53.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-ddmr5" for this suite. 
Dec 17 11:07:09.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:07:09.691: INFO: namespace: e2e-tests-svc-latency-ddmr5, resource: bindings, ignored listing per whitelist
Dec 17 11:07:10.014: INFO: namespace e2e-tests-svc-latency-ddmr5 deletion completed in 1m16.467449842s

• [SLOW TEST:125.197 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:07:10.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 11:07:10.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-qvb4b" to be "success or failure"
Dec 17 11:07:10.216: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.209704ms
Dec 17 11:07:12.525: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331424963s
Dec 17 11:07:14.540: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345689615s
Dec 17 11:07:17.100: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90621874s
Dec 17 11:07:19.112: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918451328s
Dec 17 11:07:21.134: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.940242939s
Dec 17 11:07:23.156: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.96187384s
STEP: Saw pod success
Dec 17 11:07:23.156: INFO: Pod "downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:07:23.162: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004 container client-container:
STEP: delete the pod
Dec 17 11:07:23.540: INFO: Waiting for pod downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:07:23.563: INFO: Pod downwardapi-volume-5eeee7f6-20bd-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:07:23.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qvb4b" for this suite.
Dec 17 11:07:29.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:07:29.733: INFO: namespace: e2e-tests-downward-api-qvb4b, resource: bindings, ignored listing per whitelist
Dec 17 11:07:29.890: INFO: namespace e2e-tests-downward-api-qvb4b deletion completed in 6.315456715s

• [SLOW TEST:19.876 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:07:29.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 11:07:30.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-nvpx8" to be "success or failure"
Dec 17 11:07:30.266: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.250311ms
Dec 17 11:07:32.372: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1257695s
Dec 17 11:07:34.398: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151639066s
Dec 17 11:07:36.563: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317042565s
Dec 17 11:07:38.589: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343036543s
Dec 17 11:07:40.624: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.377600435s
STEP: Saw pod success
Dec 17 11:07:40.624: INFO: Pod "downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:07:40.657: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004 container client-container:
STEP: delete the pod
Dec 17 11:07:41.145: INFO: Waiting for pod downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:07:41.169: INFO: Pod downwardapi-volume-6adf5c06-20bd-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:07:41.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nvpx8" for this suite.
Dec 17 11:07:47.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:07:47.948: INFO: namespace: e2e-tests-downward-api-nvpx8, resource: bindings, ignored listing per whitelist
Dec 17 11:07:47.948: INFO: namespace e2e-tests-downward-api-nvpx8 deletion completed in 6.743371357s

• [SLOW TEST:18.056 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:07:47.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-758bcafa-20bd-11ea-a5ef-0242ac110004
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:08:02.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dkshv" for this suite.
Dec 17 11:08:20.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:08:20.413: INFO: namespace: e2e-tests-configmap-dkshv, resource: bindings, ignored listing per whitelist
Dec 17 11:08:20.666: INFO: namespace e2e-tests-configmap-dkshv deletion completed in 18.323820296s

• [SLOW TEST:32.717 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:08:20.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-890d6c84-20bd-11ea-a5ef-0242ac110004
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-890d6c84-20bd-11ea-a5ef-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:10:03.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7zj5f" for this suite.
Dec 17 11:10:27.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:10:27.536: INFO: namespace: e2e-tests-configmap-7zj5f, resource: bindings, ignored listing per whitelist
Dec 17 11:10:27.560: INFO: namespace e2e-tests-configmap-7zj5f deletion completed in 24.250358731s

• [SLOW TEST:126.894 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:10:27.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1217 11:11:13.748267 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 11:11:13.748: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:11:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-v69rf" for this suite.
Dec 17 11:11:21.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:11:24.716: INFO: namespace: e2e-tests-gc-v69rf, resource: bindings, ignored listing per whitelist
Dec 17 11:11:24.793: INFO: namespace e2e-tests-gc-v69rf deletion completed in 11.025631565s
• [SLOW TEST:57.233 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:11:24.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-f70e21b1-20bd-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 11:11:25.729: INFO: Waiting up to 5m0s for pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-xwsrj" to be "success or failure"
Dec 17 11:11:25.767: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 38.202185ms
Dec 17 11:11:28.534: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.804867224s
Dec 17 11:11:31.375: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.646035595s
Dec 17 11:11:33.389: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.660237268s
Dec 17 11:11:35.404: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.675293912s
Dec 17 11:11:37.416: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.687006362s
Dec 17 11:11:39.451: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.722406423s
Dec 17 11:11:41.810: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.080787993s
Dec 17 11:11:43.830: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.100798203s
Dec 17 11:11:45.845: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.116259473s
STEP: Saw pod success
Dec 17 11:11:45.845: INFO: Pod "pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:11:45.864: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 17 11:11:46.647: INFO: Waiting for pod pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:11:46.670: INFO: Pod pod-configmaps-f71e0d90-20bd-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:11:46.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xwsrj" for this suite.
Dec 17 11:11:53.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:11:53.404: INFO: namespace: e2e-tests-configmap-xwsrj, resource: bindings, ignored listing per whitelist
Dec 17 11:11:53.454: INFO: namespace e2e-tests-configmap-xwsrj deletion completed in 6.324942825s
• [SLOW TEST:28.660 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:11:53.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1217 11:11:54.860974 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 11:11:54.861: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:11:54.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7fhgs" for this suite.
Dec 17 11:12:02.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:12:02.999: INFO: namespace: e2e-tests-gc-7fhgs, resource: bindings, ignored listing per whitelist
Dec 17 11:12:03.216: INFO: namespace e2e-tests-gc-7fhgs deletion completed in 8.351252236s
• [SLOW TEST:9.763 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:12:03.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1217 11:12:34.266041 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 11:12:34.266: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:12:34.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-x9nwh" for this suite.
Dec 17 11:12:44.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:12:45.989: INFO: namespace: e2e-tests-gc-x9nwh, resource: bindings, ignored listing per whitelist
Dec 17 11:12:46.003: INFO: namespace e2e-tests-gc-x9nwh deletion completed in 11.731261687s
• [SLOW TEST:42.786 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:12:46.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 11:12:46.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-ksfv4" to be "success or failure"
Dec 17 11:12:46.396: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 103.283551ms
Dec 17 11:12:48.455: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162750165s
Dec 17 11:12:50.487: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193934891s
Dec 17 11:12:53.088: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795660761s
Dec 17 11:12:55.380: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.087128608s
Dec 17 11:12:57.399: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.106211977s
STEP: Saw pod success
Dec 17 11:12:57.399: INFO: Pod "downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:12:57.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004 container client-container:
STEP: delete the pod
Dec 17 11:12:57.633: INFO: Waiting for pod downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:12:57.651: INFO: Pod downwardapi-volume-27407ebc-20be-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:12:57.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ksfv4" for this suite.
Dec 17 11:13:03.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:13:04.012: INFO: namespace: e2e-tests-projected-ksfv4, resource: bindings, ignored listing per whitelist
Dec 17 11:13:04.118: INFO: namespace e2e-tests-projected-ksfv4 deletion completed in 6.455430241s
• [SLOW TEST:18.115 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:13:04.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:13:14.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gvp6h" for this suite.
Dec 17 11:14:04.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:14:04.949: INFO: namespace: e2e-tests-kubelet-test-gvp6h, resource: bindings, ignored listing per whitelist
Dec 17 11:14:04.954: INFO: namespace e2e-tests-kubelet-test-gvp6h deletion completed in 50.326309498s
• [SLOW TEST:60.836 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:14:04.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 11:14:05.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-m8jsq" to be "success or failure"
Dec 17 11:14:05.292: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 95.163682ms
Dec 17 11:14:07.775: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578588997s
Dec 17 11:14:09.791: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.594200659s
Dec 17 11:14:12.000: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.803381411s
Dec 17 11:14:14.044: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.847671567s
Dec 17 11:14:16.059: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.862177028s
STEP: Saw pod success
Dec 17 11:14:16.059: INFO: Pod "downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:14:16.064: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004 container client-container:
STEP: delete the pod
Dec 17 11:14:16.725: INFO: Waiting for pod downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:14:16.737: INFO: Pod downwardapi-volume-5648ed08-20be-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:14:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-m8jsq" for this suite.
Dec 17 11:14:22.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:14:23.018: INFO: namespace: e2e-tests-downward-api-m8jsq, resource: bindings, ignored listing per whitelist
Dec 17 11:14:23.091: INFO: namespace e2e-tests-downward-api-m8jsq deletion completed in 6.341334933s
• [SLOW TEST:18.136 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:14:23.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-611b5b99-20be-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 11:14:23.352: INFO: Waiting up to 5m0s for pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-dpsqs" to be "success or failure"
Dec 17 11:14:23.359: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360592ms
Dec 17 11:14:25.545: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192432468s
Dec 17 11:14:27.565: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213127833s
Dec 17 11:14:30.672: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.319358019s
Dec 17 11:14:32.686: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.333934358s
Dec 17 11:14:34.723: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.370764108s
STEP: Saw pod success
Dec 17 11:14:34.723: INFO: Pod "pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:14:34.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 17 11:14:34.781: INFO: Waiting for pod pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:14:34.821: INFO: Pod pod-configmaps-611c5a4b-20be-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:14:34.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dpsqs" for this suite.
Dec 17 11:14:42.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:14:42.372: INFO: namespace: e2e-tests-configmap-dpsqs, resource: bindings, ignored listing per whitelist
Dec 17 11:14:42.526: INFO: namespace e2e-tests-configmap-dpsqs deletion completed in 6.686356736s
• [SLOW TEST:19.435 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:14:42.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-x5c7t in namespace e2e-tests-proxy-2j28d
I1217 11:14:42.942199 8 runners.go:184] Created replication controller with name: proxy-service-x5c7t, namespace: e2e-tests-proxy-2j28d, replica count: 1
I1217 11:14:43.994312 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:44.994910 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:45.995568 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:46.996218 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:47.996678 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:48.998021 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:49.999436 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:50.999802 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:52.000119 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 11:14:53.000480 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:14:54.001021 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:14:55.002191 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:14:56.003030 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:14:57.003525 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:14:58.004007 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:14:59.004431 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:15:00.005144 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1217 11:15:01.005720 8 runners.go:184] proxy-service-x5c7t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 17 11:15:01.021: INFO: setup took 18.207218325s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 17 11:15:01.047: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-2j28d/pods/proxy-service-x5c7t-bs4ng/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 17 11:15:19.120: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:15:41.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-dsqfg" for this suite.
Dec 17 11:16:07.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:16:07.611: INFO: namespace: e2e-tests-init-container-dsqfg, resource: bindings, ignored listing per whitelist
Dec 17 11:16:07.622: INFO: namespace e2e-tests-init-container-dsqfg deletion completed in 26.206226049s
• [SLOW TEST:48.637 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:16:07.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 17 11:16:07.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 17 11:16:09.767: INFO: stderr: ""
Dec 17 11:16:09.767: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:16:09.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xqkr9" for this suite.
Dec 17 11:16:15.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:16:15.963: INFO: namespace: e2e-tests-kubectl-xqkr9, resource: bindings, ignored listing per whitelist
Dec 17 11:16:16.027: INFO: namespace e2e-tests-kubectl-xqkr9 deletion completed in 6.248024565s
• [SLOW TEST:8.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:16:16.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-7l42f/configmap-test-a46bf9e2-20be-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 11:16:16.350: INFO: Waiting up to 5m0s for pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-7l42f" to be "success or failure"
Dec 17 11:16:16.366: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.580367ms
Dec 17 11:16:18.378: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028174541s
Dec 17 11:16:20.399: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049063298s
Dec 17 11:16:22.486: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135951275s
Dec 17 11:16:25.408: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.058007918s
Dec 17 11:16:27.431: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.080967977s
STEP: Saw pod success
Dec 17 11:16:27.431: INFO: Pod "pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:16:27.443: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004 container env-test:
STEP: delete the pod
Dec 17 11:16:27.675: INFO: Waiting for pod pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:16:27.683: INFO: Pod pod-configmaps-a46cce0c-20be-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:16:27.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7l42f" for this suite.
Dec 17 11:16:33.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:16:34.201: INFO: namespace: e2e-tests-configmap-7l42f, resource: bindings, ignored listing per whitelist
Dec 17 11:16:34.210: INFO: namespace e2e-tests-configmap-7l42f deletion completed in 6.51614634s
• [SLOW TEST:18.183 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating
a kubernetes client Dec 17 11:16:34.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 17 11:16:34.432: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:16:35.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-b6dpg" for this suite. Dec 17 11:16:41.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:16:41.962: INFO: namespace: e2e-tests-custom-resource-definition-b6dpg, resource: bindings, ignored listing per whitelist Dec 17 11:16:42.027: INFO: namespace e2e-tests-custom-resource-definition-b6dpg deletion completed in 6.248455326s • [SLOW TEST:7.817 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:16:42.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 17 11:16:42.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-h759t' Dec 17 11:16:42.408: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 17 11:16:42.409: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Dec 17 11:16:42.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-h759t' Dec 17 11:16:42.681: INFO: stderr: "" Dec 17 11:16:42.682: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:16:42.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h759t" for this suite. Dec 17 11:17:04.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:17:04.819: INFO: namespace: e2e-tests-kubectl-h759t, resource: bindings, ignored listing per whitelist Dec 17 11:17:04.903: INFO: namespace e2e-tests-kubectl-h759t deletion completed in 22.179221402s • [SLOW TEST:22.876 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:17:04.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Dec 17 11:17:05.223: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-dg4d7,SelfLink:/api/v1/namespaces/e2e-tests-watch-dg4d7/configmaps/e2e-watch-test-watch-closed,UID:c197e592-20be-11ea-a994-fa163e34d433,ResourceVersion:15114772,Generation:0,CreationTimestamp:2019-12-17 11:17:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 17 11:17:05.223: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-dg4d7,SelfLink:/api/v1/namespaces/e2e-tests-watch-dg4d7/configmaps/e2e-watch-test-watch-closed,UID:c197e592-20be-11ea-a994-fa163e34d433,ResourceVersion:15114773,Generation:0,CreationTimestamp:2019-12-17 11:17:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Dec 17 11:17:05.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-dg4d7,SelfLink:/api/v1/namespaces/e2e-tests-watch-dg4d7/configmaps/e2e-watch-test-watch-closed,UID:c197e592-20be-11ea-a994-fa163e34d433,ResourceVersion:15114774,Generation:0,CreationTimestamp:2019-12-17 11:17:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 17 11:17:05.257: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-dg4d7,SelfLink:/api/v1/namespaces/e2e-tests-watch-dg4d7/configmaps/e2e-watch-test-watch-closed,UID:c197e592-20be-11ea-a994-fa163e34d433,ResourceVersion:15114775,Generation:0,CreationTimestamp:2019-12-17 11:17:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:17:05.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-dg4d7" for this suite. Dec 17 11:17:11.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:17:11.583: INFO: namespace: e2e-tests-watch-dg4d7, resource: bindings, ignored listing per whitelist Dec 17 11:17:11.587: INFO: namespace e2e-tests-watch-dg4d7 deletion completed in 6.315243038s • [SLOW TEST:6.683 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:17:11.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2fk8l [It] should perform 
canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Dec 17 11:17:11.830: INFO: Found 0 stateful pods, waiting for 3 Dec 17 11:17:21.845: INFO: Found 1 stateful pods, waiting for 3 Dec 17 11:17:32.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:17:32.043: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:17:32.043: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 17 11:17:41.853: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:17:41.853: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:17:41.853: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 17 11:17:41.979: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Dec 17 11:17:52.207: INFO: Updating stateful set ss2 Dec 17 11:17:52.290: INFO: Waiting for Pod e2e-tests-statefulset-2fk8l/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 17 11:18:02.315: INFO: Waiting for Pod e2e-tests-statefulset-2fk8l/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Dec 17 11:18:12.787: INFO: Found 2 stateful pods, waiting for 3 Dec 17 11:18:22.899: INFO: Found 2 stateful pods, waiting for 3 Dec 17 11:18:32.800: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 
11:18:32.800: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:18:32.800: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 17 11:18:42.807: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:18:42.807: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:18:42.807: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 17 11:18:42.844: INFO: Updating stateful set ss2 Dec 17 11:18:42.870: INFO: Waiting for Pod e2e-tests-statefulset-2fk8l/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 17 11:18:52.948: INFO: Updating stateful set ss2 Dec 17 11:18:52.972: INFO: Waiting for StatefulSet e2e-tests-statefulset-2fk8l/ss2 to complete update Dec 17 11:18:52.973: INFO: Waiting for Pod e2e-tests-statefulset-2fk8l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 17 11:19:02.998: INFO: Waiting for StatefulSet e2e-tests-statefulset-2fk8l/ss2 to complete update Dec 17 11:19:02.999: INFO: Waiting for Pod e2e-tests-statefulset-2fk8l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 17 11:19:13.017: INFO: Waiting for StatefulSet e2e-tests-statefulset-2fk8l/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 17 11:19:23.037: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2fk8l Dec 17 11:19:23.045: INFO: Scaling statefulset ss2 to 0 Dec 17 11:19:43.154: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 11:19:43.162: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 
17 11:19:43.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2fk8l" for this suite. Dec 17 11:19:51.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:19:51.539: INFO: namespace: e2e-tests-statefulset-2fk8l, resource: bindings, ignored listing per whitelist Dec 17 11:19:51.573: INFO: namespace e2e-tests-statefulset-2fk8l deletion completed in 8.333845805s • [SLOW TEST:159.986 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:19:51.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Dec 17 11:19:51.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:19:52.446: INFO: stderr: "" Dec 17 11:19:52.446: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 17 11:19:52.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:19:52.685: INFO: stderr: "" Dec 17 11:19:52.685: INFO: stdout: "update-demo-nautilus-ktphr update-demo-nautilus-vqj58 " Dec 17 11:19:52.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktphr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:19:52.994: INFO: stderr: "" Dec 17 11:19:52.994: INFO: stdout: "" Dec 17 11:19:52.994: INFO: update-demo-nautilus-ktphr is created but not running Dec 17 11:19:57.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:19:58.180: INFO: stderr: "" Dec 17 11:19:58.180: INFO: stdout: "update-demo-nautilus-ktphr update-demo-nautilus-vqj58 " Dec 17 11:19:58.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktphr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:19:58.327: INFO: stderr: "" Dec 17 11:19:58.327: INFO: stdout: "" Dec 17 11:19:58.327: INFO: update-demo-nautilus-ktphr is created but not running Dec 17 11:20:03.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:03.501: INFO: stderr: "" Dec 17 11:20:03.501: INFO: stdout: "update-demo-nautilus-ktphr update-demo-nautilus-vqj58 " Dec 17 11:20:03.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktphr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:03.620: INFO: stderr: "" Dec 17 11:20:03.620: INFO: stdout: "" Dec 17 11:20:03.620: INFO: update-demo-nautilus-ktphr is created but not running Dec 17 11:20:08.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:08.804: INFO: stderr: "" Dec 17 11:20:08.804: INFO: stdout: "update-demo-nautilus-ktphr update-demo-nautilus-vqj58 " Dec 17 11:20:08.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktphr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:08.964: INFO: stderr: "" Dec 17 11:20:08.964: INFO: stdout: "true" Dec 17 11:20:08.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktphr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:09.134: INFO: stderr: "" Dec 17 11:20:09.135: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:20:09.135: INFO: validating pod update-demo-nautilus-ktphr Dec 17 11:20:09.178: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:20:09.178: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:20:09.178: INFO: update-demo-nautilus-ktphr is verified up and running Dec 17 11:20:09.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqj58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:09.336: INFO: stderr: "" Dec 17 11:20:09.336: INFO: stdout: "true" Dec 17 11:20:09.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqj58 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:09.477: INFO: stderr: "" Dec 17 11:20:09.477: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 11:20:09.477: INFO: validating pod update-demo-nautilus-vqj58 Dec 17 11:20:09.489: INFO: got data: { "image": "nautilus.jpg" } Dec 17 11:20:09.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 11:20:09.489: INFO: update-demo-nautilus-vqj58 is verified up and running STEP: rolling-update to new replication controller Dec 17 11:20:09.491: INFO: scanned /root for discovery docs: Dec 17 11:20:09.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:45.781: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 17 11:20:45.781: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
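[Editor's note: the manifest piped to `kubectl create -f -` above is generated by the e2e framework and never echoed in the log. A hypothetical reconstruction of a minimal ReplicationController of the kind this Update Demo test drives — name, label, and image taken from the log, everything else an assumption:]

```yaml
# Hypothetical reconstruction; the real manifest is built by the e2e framework.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

[`kubectl rolling-update update-demo-nautilus --update-period=1s -f -` then replaces these pods one at a time with pods from a second manifest (the kitten image seen later in the log); the stderr above notes this command is deprecated in favor of `kubectl rollout`.]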
Dec 17 11:20:45.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:46.071: INFO: stderr: "" Dec 17 11:20:46.072: INFO: stdout: "update-demo-kitten-4pvtq update-demo-kitten-dnc6l " Dec 17 11:20:46.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4pvtq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:46.276: INFO: stderr: "" Dec 17 11:20:46.276: INFO: stdout: "true" Dec 17 11:20:46.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4pvtq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:46.402: INFO: stderr: "" Dec 17 11:20:46.402: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 17 11:20:46.402: INFO: validating pod update-demo-kitten-4pvtq Dec 17 11:20:46.448: INFO: got data: { "image": "kitten.jpg" } Dec 17 11:20:46.448: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 17 11:20:46.448: INFO: update-demo-kitten-4pvtq is verified up and running Dec 17 11:20:46.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dnc6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:46.596: INFO: stderr: "" Dec 17 11:20:46.596: INFO: stdout: "true" Dec 17 11:20:46.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dnc6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k2tfx' Dec 17 11:20:46.755: INFO: stderr: "" Dec 17 11:20:46.755: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 17 11:20:46.755: INFO: validating pod update-demo-kitten-dnc6l Dec 17 11:20:46.770: INFO: got data: { "image": "kitten.jpg" } Dec 17 11:20:46.770: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 17 11:20:46.770: INFO: update-demo-kitten-dnc6l is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:20:46.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k2tfx" for this suite. 
Dec 17 11:21:10.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:21:10.939: INFO: namespace: e2e-tests-kubectl-k2tfx, resource: bindings, ignored listing per whitelist Dec 17 11:21:10.985: INFO: namespace e2e-tests-kubectl-k2tfx deletion completed in 24.209871795s • [SLOW TEST:79.412 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:21:10.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 17 11:21:11.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-9zxnt" to be "success or failure" Dec 17 11:21:11.287: INFO: Pod 
"downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 133.926265ms Dec 17 11:21:13.305: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151703695s Dec 17 11:21:15.325: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17152206s Dec 17 11:21:17.371: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21783443s Dec 17 11:21:19.391: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237387507s Dec 17 11:21:21.409: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25597539s Dec 17 11:21:23.500: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.346804728s STEP: Saw pod success Dec 17 11:21:23.501: INFO: Pod "downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:21:23.563: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004 container client-container: STEP: delete the pod Dec 17 11:21:24.031: INFO: Waiting for pod downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004 to disappear Dec 17 11:21:24.050: INFO: Pod downwardapi-volume-542e44d4-20bf-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:21:24.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9zxnt" for this suite. 
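[Editor's note: the DefaultMode test above creates a pod mounting a projected downward API volume and verifies the permission bits on the generated files. A hedged sketch of such a pod spec — the names, image, and the 0400 mode are illustrative assumptions; the e2e framework generates its own spec:]

```yaml
# Illustrative only; the real pod spec is built by the e2e framework.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400   # applied to every file projected into the volume
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```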
Dec 17 11:21:30.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:21:30.290: INFO: namespace: e2e-tests-projected-9zxnt, resource: bindings, ignored listing per whitelist Dec 17 11:21:30.396: INFO: namespace e2e-tests-projected-9zxnt deletion completed in 6.332687616s • [SLOW TEST:19.411 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:21:30.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-5fe35e46-20bf-11ea-a5ef-0242ac110004 STEP: Creating configMap with name cm-test-opt-upd-5fe35eb2-20bf-11ea-a5ef-0242ac110004 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5fe35e46-20bf-11ea-a5ef-0242ac110004 STEP: Updating configmap cm-test-opt-upd-5fe35eb2-20bf-11ea-a5ef-0242ac110004 STEP: Creating configMap with name cm-test-opt-create-5fe35eca-20bf-11ea-a5ef-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:21:51.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fqtx4" for this suite. Dec 17 11:22:15.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:22:15.545: INFO: namespace: e2e-tests-configmap-fqtx4, resource: bindings, ignored listing per whitelist Dec 17 11:22:15.641: INFO: namespace e2e-tests-configmap-fqtx4 deletion completed in 24.249516951s • [SLOW TEST:45.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:22:15.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 17 11:22:15.977: INFO: Waiting up to 5m0s for pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-4lskf" to be "success or failure" Dec 
17 11:22:15.992: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.636401ms Dec 17 11:22:18.340: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36188517s Dec 17 11:22:20.373: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.395124398s Dec 17 11:22:22.681: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702679087s Dec 17 11:22:24.722: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743698594s Dec 17 11:22:26.748: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.770186819s STEP: Saw pod success Dec 17 11:22:26.748: INFO: Pod "downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:22:26.757: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004 container dapi-container: STEP: delete the pod Dec 17 11:22:27.084: INFO: Waiting for pod downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004 to disappear Dec 17 11:22:27.099: INFO: Pod downward-api-7ac52779-20bf-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:22:27.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4lskf" for this suite. 
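The Downward API test above verifies that pod name, namespace, and IP are injected as environment variables. A minimal pod spec exercising the same fieldRef paths might look like the following sketch (the pod and variable names are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep MY_POD_"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name       # pod name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace  # pod namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP        # pod IP, assigned at runtime
```

The test framework then waits for the pod to reach Succeeded and inspects the container log for the expected values, as seen in the "success or failure" polling above.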
Dec 17 11:22:33.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:22:33.362: INFO: namespace: e2e-tests-downward-api-4lskf, resource: bindings, ignored listing per whitelist Dec 17 11:22:33.419: INFO: namespace e2e-tests-downward-api-4lskf deletion completed in 6.31170015s • [SLOW TEST:17.777 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:22:33.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-855e77d9-20bf-11ea-a5ef-0242ac110004 STEP: Creating a pod to test consume secrets Dec 17 11:22:33.692: INFO: Waiting up to 5m0s for pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-tntd2" to be "success or failure" Dec 17 11:22:33.704: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.913426ms Dec 17 11:22:35.729: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037347101s Dec 17 11:22:37.749: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057333968s Dec 17 11:22:40.217: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524696373s Dec 17 11:22:42.245: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552740276s Dec 17 11:22:44.323: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.631183783s Dec 17 11:22:46.348: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.656090709s STEP: Saw pod success Dec 17 11:22:46.348: INFO: Pod "pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:22:46.373: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004 container secret-volume-test: STEP: delete the pod Dec 17 11:22:46.480: INFO: Waiting for pod pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004 to disappear Dec 17 11:22:46.498: INFO: Pod pod-secrets-85601060-20bf-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:22:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tntd2" for this suite. 
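The secret-volume test above checks file modes for a non-root user with defaultMode and fsGroup set. A sketch of a pod spec that exercises the same combination (secret name and UIDs are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo       # hypothetical name, not from this run
spec:
  securityContext:
    runAsUser: 1000              # run as non-root
    fsGroup: 2000                # group ownership applied to volume files
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret      # assumed to exist in the namespace
      defaultMode: 0440          # octal file mode for projected keys
```

With fsGroup set, the kubelet chowns the volume contents to the supplemental group, which is what makes the files readable to the non-root container here.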
Dec 17 11:22:52.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:22:52.695: INFO: namespace: e2e-tests-secrets-tntd2, resource: bindings, ignored listing per whitelist Dec 17 11:22:53.029: INFO: namespace e2e-tests-secrets-tntd2 deletion completed in 6.515711257s • [SLOW TEST:19.610 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:22:53.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Dec 17 11:22:53.332: INFO: Waiting up to 5m0s for pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-var-expansion-9zbr4" to be "success or failure" Dec 17 11:22:53.355: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.109072ms Dec 17 11:22:55.400: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067051703s Dec 17 11:22:57.442: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109877877s Dec 17 11:22:59.469: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136428689s Dec 17 11:23:01.560: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227061939s Dec 17 11:23:03.576: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.243729836s STEP: Saw pod success Dec 17 11:23:03.576: INFO: Pod "var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:23:03.581: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004 container dapi-container: STEP: delete the pod Dec 17 11:23:04.084: INFO: Waiting for pod var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004 to disappear Dec 17 11:23:04.348: INFO: Pod var-expansion-910b8e03-20bf-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:23:04.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-9zbr4" for this suite. 
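The var-expansion test above relies on the kubelet expanding $(VAR) references in a container's command/args from its declared env vars. A minimal sketch (names and message are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # $(MESSAGE) is substituted by the kubelet before the container starts;
    # this is distinct from shell expansion, which would use $MESSAGE.
    command: ["sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from variable expansion"
```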
Dec 17 11:23:10.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:23:10.506: INFO: namespace: e2e-tests-var-expansion-9zbr4, resource: bindings, ignored listing per whitelist Dec 17 11:23:11.038: INFO: namespace e2e-tests-var-expansion-9zbr4 deletion completed in 6.659191801s • [SLOW TEST:18.009 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:23:11.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 17 11:23:11.423: INFO: Waiting up to 5m0s for pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-r2zdh" to be "success or failure" Dec 17 11:23:11.466: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 42.857209ms Dec 17 11:23:13.479: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056333613s Dec 17 11:23:15.504: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08129676s Dec 17 11:23:17.557: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134488179s Dec 17 11:23:19.573: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150671261s Dec 17 11:23:21.586: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.16353135s STEP: Saw pod success Dec 17 11:23:21.587: INFO: Pod "pod-9bdf1602-20bf-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:23:21.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9bdf1602-20bf-11ea-a5ef-0242ac110004 container test-container: STEP: delete the pod Dec 17 11:23:22.712: INFO: Waiting for pod pod-9bdf1602-20bf-11ea-a5ef-0242ac110004 to disappear Dec 17 11:23:22.728: INFO: Pod pod-9bdf1602-20bf-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:23:22.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-r2zdh" for this suite. 
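The (root,0644,tmpfs) emptyDir test above mounts a memory-backed volume and verifies file content and permissions. A sketch of the shape of such a pod (paths and names are illustrative assumptions, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # backs the volume with tmpfs instead of node disk
```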
Dec 17 11:23:28.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:23:29.020: INFO: namespace: e2e-tests-emptydir-r2zdh, resource: bindings, ignored listing per whitelist Dec 17 11:23:29.047: INFO: namespace e2e-tests-emptydir-r2zdh deletion completed in 6.308019933s • [SLOW TEST:18.008 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:23:29.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 17 11:23:29.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Dec 17 11:23:29.415: INFO: stderr: "" Dec 17 11:23:29.415: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", 
BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Dec 17 11:23:29.421: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:23:29.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rphkt" for this suite. Dec 17 11:23:35.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:23:35.633: INFO: namespace: e2e-tests-kubectl-rphkt, resource: bindings, ignored listing per whitelist Dec 17 11:23:35.636: INFO: namespace e2e-tests-kubectl-rphkt deletion completed in 6.200911578s S [SKIPPING] [6.588 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 17 11:23:29.421: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:23:35.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nxtwz Dec 17 11:23:47.927: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nxtwz STEP: checking the pod's current state and verifying that restartCount is present Dec 17 11:23:47.935: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:27:49.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nxtwz" for this suite. 
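The probe test above creates a pod named liveness-exec whose exec probe keeps succeeding, then watches for ~4 minutes to confirm restartCount stays at 0. The classic shape of such a pod, as a hedged sketch (timings are illustrative, not the test's exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the probed file up front and keep the container alive;
    # since /tmp/health is never removed, the probe never fails.
    args: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exit 0 while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```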
Dec 17 11:27:55.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:27:55.311: INFO: namespace: e2e-tests-container-probe-nxtwz, resource: bindings, ignored listing per whitelist Dec 17 11:27:55.330: INFO: namespace e2e-tests-container-probe-nxtwz deletion completed in 6.253377224s • [SLOW TEST:259.693 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:27:55.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-2jdhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2jdhb to expose endpoints map[] Dec 17 11:27:55.790: INFO: Get endpoints failed (19.061952ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Dec 17 11:27:56.808: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2jdhb exposes 
endpoints map[] (1.03648706s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-2jdhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2jdhb to expose endpoints map[pod1:[80]] Dec 17 11:28:01.376: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.52694396s elapsed, will retry) Dec 17 11:28:07.162: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (10.312555574s elapsed, will retry) Dec 17 11:28:09.334: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2jdhb exposes endpoints map[pod1:[80]] (12.484923727s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-2jdhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2jdhb to expose endpoints map[pod1:[80] pod2:[80]] Dec 17 11:28:14.527: INFO: Unexpected endpoints: found map[45fcf942-20c0-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.176621129s elapsed, will retry) Dec 17 11:28:19.856: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2jdhb exposes endpoints map[pod1:[80] pod2:[80]] (10.505240563s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-2jdhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2jdhb to expose endpoints map[pod2:[80]] Dec 17 11:28:21.234: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2jdhb exposes endpoints map[pod2:[80]] (1.359726815s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-2jdhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2jdhb to expose endpoints map[] Dec 17 11:28:22.558: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2jdhb exposes endpoints map[] (1.300982372s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:28:23.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2jdhb" for this suite. Dec 17 11:28:47.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:28:47.480: INFO: namespace: e2e-tests-services-2jdhb, resource: bindings, ignored listing per whitelist Dec 17 11:28:47.516: INFO: namespace e2e-tests-services-2jdhb deletion completed in 24.397685523s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:52.186 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:28:47.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 17 11:28:47.768: INFO: Waiting up to 5m0s for pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004" in namespace 
"e2e-tests-emptydir-xt99t" to be "success or failure" Dec 17 11:28:47.915: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 146.026592ms Dec 17 11:28:50.581: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812814571s Dec 17 11:28:52.609: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.840385671s Dec 17 11:28:54.659: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.89071345s Dec 17 11:28:56.707: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.938287504s Dec 17 11:28:58.727: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.958885247s Dec 17 11:29:00.747: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.978699832s STEP: Saw pod success Dec 17 11:29:00.747: INFO: Pod "pod-64584b1c-20c0-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:29:00.753: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-64584b1c-20c0-11ea-a5ef-0242ac110004 container test-container: STEP: delete the pod Dec 17 11:29:01.137: INFO: Waiting for pod pod-64584b1c-20c0-11ea-a5ef-0242ac110004 to disappear Dec 17 11:29:01.434: INFO: Pod pod-64584b1c-20c0-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:29:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xt99t" for this suite. 
Dec 17 11:29:07.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:29:07.978: INFO: namespace: e2e-tests-emptydir-xt99t, resource: bindings, ignored listing per whitelist
Dec 17 11:29:08.032: INFO: namespace e2e-tests-emptydir-xt99t deletion completed in 6.563677345s
• [SLOW TEST:20.515 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:29:08.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 17 11:29:08.220: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Dec 17 11:29:08.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:10.950: INFO: stderr: ""
Dec 17 11:29:10.950: INFO: stdout: "service/redis-slave created\n"
Dec 17 11:29:10.951: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Dec 17 11:29:10.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:11.581: INFO: stderr: ""
Dec 17 11:29:11.581: INFO: stdout: "service/redis-master created\n"
Dec 17 11:29:11.582: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Dec 17 11:29:11.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:12.224: INFO: stderr: ""
Dec 17 11:29:12.224: INFO: stdout: "service/frontend created\n"
Dec 17 11:29:12.226: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Dec 17 11:29:12.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:12.817: INFO: stderr: ""
Dec 17 11:29:12.817: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 17 11:29:12.818: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 17 11:29:12.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:13.578: INFO: stderr: ""
Dec 17 11:29:13.578: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 17 11:29:13.580: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Dec 17 11:29:13.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:15.488: INFO: stderr: ""
Dec 17 11:29:15.488: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 17 11:29:15.488: INFO: Waiting for all frontend pods to be Running.
Dec 17 11:29:45.543: INFO: Waiting for frontend to serve content.
Dec 17 11:29:46.232: INFO: Trying to add a new entry to the guestbook.
Dec 17 11:29:46.346: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 17 11:29:46.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:46.849: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 11:29:46.849: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 11:29:46.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:47.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 11:29:47.107: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 11:29:47.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:47.263: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 11:29:47.263: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 11:29:47.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:47.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 11:29:47.470: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 11:29:47.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:47.681: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 11:29:47.681: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 11:29:47.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xs2x7'
Dec 17 11:29:48.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 11:29:48.115: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:29:48.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xs2x7" for this suite.
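Every create and force-delete step in the guestbook test above shells out to the same kubectl binary with a kubeconfig and a namespace flag. As an illustration only (not the framework's actual Go helper), a minimal Python sketch that assembles those argument vectors:

```python
def kubectl_args(verb, *extra, kubeconfig="/root/.kube/config", namespace=None):
    """Build an argv list for a kubectl invocation, mirroring the
    '--kubeconfig=... <verb> ... --namespace=...' pattern seen in the log."""
    args = ["/usr/local/bin/kubectl", f"--kubeconfig={kubeconfig}", verb, *extra]
    if namespace:
        args.append(f"--namespace={namespace}")
    return args

# Create from stdin, as each guestbook manifest is applied:
create = kubectl_args("create", "-f", "-", namespace="e2e-tests-kubectl-xs2x7")

# Force-delete, matching the cleanup phase of the test:
delete = kubectl_args("delete", "--grace-period=0", "--force", "-f", "-",
                      namespace="e2e-tests-kubectl-xs2x7")
```

The `--grace-period=0 --force` pair is what triggers the "Immediate deletion does not wait for confirmation" warning on stderr.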
Dec 17 11:30:34.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:30:34.621: INFO: namespace: e2e-tests-kubectl-xs2x7, resource: bindings, ignored listing per whitelist
Dec 17 11:30:34.644: INFO: namespace e2e-tests-kubectl-xs2x7 deletion completed in 46.412168964s
• [SLOW TEST:86.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:30:34.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 17 11:30:55.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:30:55.359: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:30:57.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:30:57.720: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:30:59.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:30:59.377: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:01.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:01.377: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:03.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:03.379: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:05.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:05.377: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:07.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:07.370: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:09.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:09.379: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:11.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:11.381: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 11:31:13.360: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 11:31:13.384: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:31:13.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lftwg" for this suite.
Dec 17 11:31:37.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:31:37.489: INFO: namespace: e2e-tests-container-lifecycle-hook-lftwg, resource: bindings, ignored listing per whitelist
Dec 17 11:31:37.606: INFO: namespace e2e-tests-container-lifecycle-hook-lftwg deletion completed in 24.211376431s
• [SLOW TEST:62.962 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:31:37.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 17 11:31:48.414: INFO: Successfully updated pod "pod-update-c9a874aa-20c0-11ea-a5ef-0242ac110004"
STEP: verifying the updated pod is in kubernetes
Dec 17 11:31:48.442: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:31:48.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-p44rs" for this suite.
Dec 17 11:32:12.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:32:12.604: INFO: namespace: e2e-tests-pods-p44rs, resource: bindings, ignored listing per whitelist
Dec 17 11:32:12.682: INFO: namespace e2e-tests-pods-p44rs deletion completed in 24.22990758s
• [SLOW TEST:35.075 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:32:12.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-de9d7d01-20c0-11ea-a5ef-0242ac110004
STEP: Creating secret with name s-test-opt-upd-de9d7e56-20c0-11ea-a5ef-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-de9d7d01-20c0-11ea-a5ef-0242ac110004
STEP: Updating secret s-test-opt-upd-de9d7e56-20c0-11ea-a5ef-0242ac110004
STEP: Creating secret with name s-test-opt-create-de9d7eae-20c0-11ea-a5ef-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:33:58.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vdxgf" for this suite.
Dec 17 11:34:24.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:34:24.496: INFO: namespace: e2e-tests-projected-vdxgf, resource: bindings, ignored listing per whitelist
Dec 17 11:34:24.582: INFO: namespace e2e-tests-projected-vdxgf deletion completed in 26.296052772s
• [SLOW TEST:131.899 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:34:24.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-k265r
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-k265r to expose endpoints map[]
Dec 17 11:34:24.974: INFO: Get endpoints failed (17.725973ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 17 11:34:25.989: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-k265r exposes endpoints map[] (1.032594093s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-k265r
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-k265r to expose endpoints map[pod1:[100]]
Dec 17 11:34:30.252: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.24357892s elapsed, will retry)
Dec 17 11:34:35.354: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.345684046s elapsed, will retry)
Dec 17 11:34:36.383: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-k265r exposes endpoints map[pod1:[100]] (10.375377228s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-k265r
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-k265r to expose endpoints map[pod1:[100] pod2:[101]]
Dec 17 11:34:40.762: INFO: Unexpected endpoints: found map[2df528d5-20c1-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.346701281s elapsed, will retry)
Dec 17 11:34:46.570: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-k265r exposes endpoints map[pod1:[100] pod2:[101]] (10.153988528s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-k265r
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-k265r to expose endpoints map[pod2:[101]]
Dec 17 11:34:47.810: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-k265r exposes endpoints map[pod2:[101]] (1.211758617s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-k265r
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-k265r to expose endpoints map[]
Dec 17 11:34:49.381: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-k265r exposes endpoints map[] (1.549285022s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:34:49.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-k265r" for this suite.
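The service test above repeatedly compares the endpoints it observes against an expected map such as map[pod1:[100] pod2:[101]], retrying until the two agree. A small, self-contained Python sketch of that comparison (illustrative, not the framework's actual Go code):

```python
def endpoints_match(observed, expected):
    """Return True when the observed endpoint map (pod name -> port list)
    equals the expected one, as in 'exposes endpoints map[pod1:[100]]'.
    Port order is irrelevant, so both sides are normalized first."""
    normalize = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalize(observed) == normalize(expected)

# Mid-test mismatch: pod2's endpoint has not been published yet.
endpoints_match({"pod1": [100]}, {"pod1": [100], "pod2": [101]})   # False
# Once both pods are ready the maps agree regardless of key order.
endpoints_match({"pod2": [101], "pod1": [100]},
                {"pod1": [100], "pod2": [101]})                    # True
```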
Dec 17 11:35:13.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:35:13.881: INFO: namespace: e2e-tests-services-k265r, resource: bindings, ignored listing per whitelist
Dec 17 11:35:13.957: INFO: namespace e2e-tests-services-k265r deletion completed in 24.281523928s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:49.375 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:35:13.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 11:35:14.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-4xgpc'
Dec 17 11:35:14.462: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 11:35:14.463: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 17 11:35:14.627: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-vvzfh]
Dec 17 11:35:14.627: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-vvzfh" in namespace "e2e-tests-kubectl-4xgpc" to be "running and ready"
Dec 17 11:35:14.643: INFO: Pod "e2e-test-nginx-rc-vvzfh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.222802ms
Dec 17 11:35:16.662: INFO: Pod "e2e-test-nginx-rc-vvzfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03453975s
Dec 17 11:35:18.704: INFO: Pod "e2e-test-nginx-rc-vvzfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076624652s
Dec 17 11:35:20.737: INFO: Pod "e2e-test-nginx-rc-vvzfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109615063s
Dec 17 11:35:22.769: INFO: Pod "e2e-test-nginx-rc-vvzfh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141553081s
Dec 17 11:35:24.792: INFO: Pod "e2e-test-nginx-rc-vvzfh": Phase="Running", Reason="", readiness=true. Elapsed: 10.165106209s
Dec 17 11:35:24.793: INFO: Pod "e2e-test-nginx-rc-vvzfh" satisfied condition "running and ready"
Dec 17 11:35:24.793: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-vvzfh]
Dec 17 11:35:24.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-4xgpc'
Dec 17 11:35:25.086: INFO: stderr: ""
Dec 17 11:35:25.086: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 17 11:35:25.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-4xgpc'
Dec 17 11:35:25.216: INFO: stderr: ""
Dec 17 11:35:25.216: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:35:25.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4xgpc" for this suite.
Dec 17 11:35:47.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:35:47.628: INFO: namespace: e2e-tests-kubectl-4xgpc, resource: bindings, ignored listing per whitelist
Dec 17 11:35:47.784: INFO: namespace e2e-tests-kubectl-4xgpc deletion completed in 22.398503403s
• [SLOW TEST:33.827 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:35:47.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 17 11:35:48.075: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 17 11:35:53.103: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:35:54.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-kxsq9" for this suite.
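A ReplicationController releases a pod the moment the pod's labels stop matching the controller's selector, which is exactly what the "matched label ... change" step above exercises. An illustrative Python sketch of equality-based selector matching, the only kind an RC supports (the label values below are hypothetical):

```python
def selector_matches(selector, labels):
    """An RC selector is a flat map of key/value pairs; every pair must be
    present verbatim in the pod's labels for the RC to own the pod."""
    return all(labels.get(key) == value for key, value in selector.items())

selector = {"name": "pod-release"}
owned    = {"name": "pod-release", "time": "123"}    # matched -> owned by the RC
released = {"name": "pod-released", "time": "123"}   # label changed -> released
```

The extra "time" label is harmless: matching only requires the selector's pairs to be present, not the label sets to be equal.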
Dec 17 11:36:07.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:36:08.079: INFO: namespace: e2e-tests-replication-controller-kxsq9, resource: bindings, ignored listing per whitelist
Dec 17 11:36:08.079: INFO: namespace e2e-tests-replication-controller-kxsq9 deletion completed in 13.259510693s
• [SLOW TEST:20.294 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:36:08.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6aec42b5-20c1-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 11:36:08.398: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-q6x44" to be "success or failure"
Dec 17 11:36:08.428: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 29.422498ms
Dec 17 11:36:10.453: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054950933s
Dec 17 11:36:12.487: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089209364s
Dec 17 11:36:15.221: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.822301847s
Dec 17 11:36:17.281: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.883028926s
Dec 17 11:36:19.305: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907185415s
STEP: Saw pod success
Dec 17 11:36:19.305: INFO: Pod "pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:36:19.319: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004 container projected-configmap-volume-test:
STEP: delete the pod
Dec 17 11:36:19.841: INFO: Waiting for pod pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:36:20.105: INFO: Pod pod-projected-configmaps-6aed80f1-20c1-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:36:20.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q6x44" for this suite.
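The 'Waiting up to 5m0s for pod ... to be "success or failure"' lines above come from a simple poll loop over the pod phase: keep checking until the pod reaches a terminal phase or the timeout expires. A self-contained Python sketch with an injected status source so it runs without a cluster (the sleep between polls, roughly 2 s in the log, is elided):

```python
def wait_for_terminal_phase(get_phase, max_polls=150):
    """Poll a pod-phase source until it reports Succeeded or Failed,
    mimicking the framework's 'success or failure' wait loop."""
    for _ in range(max_polls):
        phase = get_phase()           # one status check per poll interval
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Simulated pod that stays Pending for a few polls, then succeeds,
# like pod-projected-configmaps-... above:
phases = iter(["Pending"] * 5 + ["Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))  # -> "Succeeded"
```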
Dec 17 11:36:26.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:36:26.416: INFO: namespace: e2e-tests-projected-q6x44, resource: bindings, ignored listing per whitelist
Dec 17 11:36:26.460: INFO: namespace e2e-tests-projected-q6x44 deletion completed in 6.308566584s
• [SLOW TEST:18.381 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:36:26.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9dg6w
Dec 17 11:36:36.884: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9dg6w
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 11:36:36.896: INFO: Initial restart count of pod liveness-http is 0
Dec 17 11:36:59.262: INFO: Restart count of pod e2e-tests-container-probe-9dg6w/liveness-http is now 1 (22.366548102s elapsed)
Dec 17 11:37:17.653: INFO: Restart count of pod e2e-tests-container-probe-9dg6w/liveness-http is now 2 (40.757036275s elapsed)
Dec 17 11:37:39.883: INFO: Restart count of pod e2e-tests-container-probe-9dg6w/liveness-http is now 3 (1m2.986818715s elapsed)
Dec 17 11:37:58.842: INFO: Restart count of pod e2e-tests-container-probe-9dg6w/liveness-http is now 4 (1m21.945955406s elapsed)
Dec 17 11:38:59.647: INFO: Restart count of pod e2e-tests-container-probe-9dg6w/liveness-http is now 5 (2m22.751305189s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:38:59.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9dg6w" for this suite.
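The probe test above asserts that the restart counts it observes only ever go up: the liveness-http pod moves 0, 1, 2, 3, 4, 5 and never regresses. The check reduces to verifying a sequence is strictly increasing; a minimal sketch, not the framework's actual code:

```python
def monotonically_increasing(counts):
    """True when each observed restartCount is strictly greater than the
    previous one, as the liveness-http test requires."""
    return all(later > earlier for earlier, later in zip(counts, counts[1:]))

observed  = [0, 1, 2, 3, 4, 5]   # the sequence from the log above
regressed = [0, 1, 1, 3]         # a hypothetical restart-count regression
```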
Dec 17 11:39:05.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:39:05.986: INFO: namespace: e2e-tests-container-probe-9dg6w, resource: bindings, ignored listing per whitelist Dec 17 11:39:06.009: INFO: namespace e2e-tests-container-probe-9dg6w deletion completed in 6.255245175s • [SLOW TEST:159.548 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:39:06.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 17 11:39:28.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:28.685: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:30.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:30.706: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:32.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:32.718: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:34.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:34.723: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:36.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:36.707: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:38.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:38.703: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:40.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:40.702: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 11:39:42.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 11:39:42.732: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:39:42.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-489cx" for this suite.
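The repeated "Waiting for pod ... to disappear" / "still exists" pairs above are a 2-second poll loop. A minimal Python sketch of that wait-until-gone pattern, assuming a `pod_exists()` callable standing in for the API lookup (names here are mine, not the framework's):

```python
import time

def wait_for_disappear(pod_exists, timeout=30.0, interval=2.0, sleep=time.sleep):
    """Poll until pod_exists() returns False; mirrors the 2s retry loop in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pod_exists():
            return True          # pod no longer exists
        sleep(interval)          # pod still exists; retry after the interval
    return False                 # timed out while the pod still existed

# Simulate a pod that disappears after three "still exists" checks.
checks = iter([True, True, True, False])
assert wait_for_disappear(lambda: next(checks), sleep=lambda s: None)
```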
Dec 17 11:40:06.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:40:06.919: INFO: namespace: e2e-tests-container-lifecycle-hook-489cx, resource: bindings, ignored listing per whitelist
Dec 17 11:40:06.951: INFO: namespace e2e-tests-container-lifecycle-hook-489cx deletion completed in 24.171725588s
• [SLOW TEST:60.942 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:40:06.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 17 11:40:35.652: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h
PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:35.653: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:36.149: INFO: Exec stderr: ""
Dec 17 11:40:36.149: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:36.149: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:36.571: INFO: Exec stderr: ""
Dec 17 11:40:36.571: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:36.571: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:36.941: INFO: Exec stderr: ""
Dec 17 11:40:36.941: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:36.941: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:37.484: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 17 11:40:37.484: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:37.485: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:37.960: INFO: Exec stderr: ""
Dec 17 11:40:37.960: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:37.960: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:38.408:
INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 17 11:40:38.408: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:38.409: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:38.777: INFO: Exec stderr: ""
Dec 17 11:40:38.778: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:38.778: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:39.133: INFO: Exec stderr: ""
Dec 17 11:40:39.133: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:39.134: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:39.522: INFO: Exec stderr: ""
Dec 17 11:40:39.522: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7jt5h PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:40:39.522: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:40:39.892: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:40:39.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-7jt5h" for this suite.
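The `cat /etc/hosts` versus `cat /etc/hosts-original` comparisons above distinguish a kubelet-managed hosts file from a container-supplied one by its content. To my knowledge the kubelet prepends a "# Kubernetes-managed hosts file." header comment to files it manages; treating that detail as an assumption, a sketch of the check:

```python
# Header the kubelet writes to hosts files it manages (assumption; verify
# against your kubelet version before relying on it).
KUBELET_HEADER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(etc_hosts_text):
    """A hosts file is treated as kubelet-managed if it starts with the marker header."""
    return etc_hosts_text.lstrip().startswith(KUBELET_HEADER)

managed = "# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n10.32.0.4 test-pod\n"
original = "127.0.0.1 localhost\n"
assert is_kubelet_managed(managed)        # hostNetwork=false, no explicit mount
assert not is_kubelet_managed(original)   # hostNetwork=true or container-mounted file
```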
Dec 17 11:41:28.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:41:28.210: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-7jt5h, resource: bindings, ignored listing per whitelist
Dec 17 11:41:28.217: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-7jt5h deletion completed in 48.301467711s
• [SLOW TEST:81.266 seconds]
[k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:41:28.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-blfxx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 11:41:28.515: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 11:42:04.959: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$']
Namespace:e2e-tests-pod-network-test-blfxx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 11:42:04.960: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 11:42:06.461: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:42:06.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-blfxx" for this suite.
Dec 17 11:42:30.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:42:30.764: INFO: namespace: e2e-tests-pod-network-test-blfxx, resource: bindings, ignored listing per whitelist
Dec 17 11:42:30.847: INFO: namespace e2e-tests-pod-network-test-blfxx deletion completed in 24.336346949s
• [SLOW TEST:62.629 seconds]
[sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:42:30.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 11:42:31.086: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-c8k9v" to be "success or failure"
Dec 17 11:42:31.097: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.405812ms
Dec 17 11:42:33.135: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049520368s
Dec 17 11:42:35.155: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069147196s
Dec 17 11:42:37.173: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086779213s
Dec 17 11:42:39.245: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159591598s
Dec 17 11:42:41.263: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.177233653s
STEP: Saw pod success
Dec 17 11:42:41.263: INFO: Pod "downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:42:41.278: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004 container client-container:
STEP: delete the pod
Dec 17 11:42:41.396: INFO: Waiting for pod downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:42:41.404: INFO: Pod downwardapi-volume-4f1452ac-20c2-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:42:41.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c8k9v" for this suite.
Dec 17 11:42:47.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:42:47.676: INFO: namespace: e2e-tests-projected-c8k9v, resource: bindings, ignored listing per whitelist
Dec 17 11:42:47.701: INFO: namespace e2e-tests-projected-c8k9v deletion completed in 6.289800514s
• [SLOW TEST:16.853 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:42:47.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6xn4x [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6xn4x STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6xn4x Dec 17 11:42:48.166: INFO: Found 0 stateful pods, waiting for 1 Dec 17 11:42:58.186: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Dec 17 11:43:08.188: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 17 11:43:08.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 11:43:09.002: INFO: stderr: "" Dec 17 11:43:09.002: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 11:43:09.002: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 11:43:09.020: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=true Dec 17 11:43:19.029: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 17 11:43:19.029: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 11:43:19.061: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:43:19.061: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:43:19.061: INFO: Dec 17 11:43:19.061: INFO: StatefulSet ss has not reached scale 3, at 1 Dec 17 11:43:20.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985840656s Dec 17 11:43:21.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.939166228s Dec 17 11:43:22.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.728203255s Dec 17 11:43:23.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.452639785s Dec 17 11:43:24.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.337905176s Dec 17 11:43:25.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.325356273s Dec 17 11:43:27.002: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.303338781s Dec 17 11:43:28.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.045121305s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6xn4x Dec 17 11:43:30.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Dec 17 11:43:32.134: INFO: stderr: "" Dec 17 11:43:32.134: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 11:43:32.134: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 11:43:32.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:43:32.418: INFO: rc: 1 Dec 17 11:43:32.419: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000aba660 exit status 1 true [0xc001ed02b0 0xc001ed02c8 0xc001ed02e0] [0xc001ed02b0 0xc001ed02c8 0xc001ed02e0] [0xc001ed02c0 0xc001ed02d8] [0x935700 0x935700] 0xc0013d2ea0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 17 11:43:42.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:43:43.160: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" Dec 17 11:43:43.160: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 11:43:43.160: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 11:43:43.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:43:43.733: INFO: stderr: "mv: 
can't rename '/tmp/index.html': No such file or directory\n" Dec 17 11:43:43.733: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 11:43:43.733: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 11:43:43.759: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:43:43.760: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 11:43:43.760: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Dec 17 11:43:43.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 11:43:44.232: INFO: stderr: "" Dec 17 11:43:44.232: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 11:43:44.232: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 11:43:44.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 11:43:44.937: INFO: stderr: "" Dec 17 11:43:44.937: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 11:43:44.937: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 11:43:44.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 11:43:45.458: INFO: stderr: "" Dec 17 11:43:45.458: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 11:43:45.458: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 11:43:45.458: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 11:43:45.469: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 17 11:43:55.545: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 17 11:43:55.545: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 17 11:43:55.545: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 17 11:43:55.693: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:43:55.693: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:43:55.693: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:55.693: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:55.693: INFO: Dec 17 11:43:55.693: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:43:56.703: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:43:56.703: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:43:56.703: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:56.703: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC 
}] Dec 17 11:43:56.703: INFO: Dec 17 11:43:56.703: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:43:58.178: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:43:58.178: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:43:58.179: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:58.179: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:58.179: INFO: Dec 17 11:43:58.179: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:43:59.230: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:43:59.230: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC 
} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:43:59.231: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:59.231: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:43:59.231: INFO: Dec 17 11:43:59.231: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:44:01.086: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:44:01.086: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:44:01.086: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:01.086: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:01.086: INFO: Dec 17 11:44:01.086: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:44:02.201: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:44:02.201: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:44:02.201: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:02.201: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:02.201: INFO: Dec 17 11:44:02.201: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:44:03.262: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:44:03.262: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:44:03.263: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:03.263: INFO: ss-2 hunter-server-hu5at5svl7ps 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:03.263: INFO: Dec 17 11:44:03.263: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:44:04.339: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:44:04.339: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:44:04.339: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:04.339: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:04.339: INFO: Dec 17 11:44:04.339: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 11:44:05.512: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 11:44:05.512: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:42:48 +0000 UTC }] Dec 17 11:44:05.512: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:05.512: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 11:43:19 +0000 UTC }] Dec 17 11:44:05.512: INFO: Dec 17 11:44:05.512: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until 
none of the pods will run in namespace e2e-tests-statefulset-6xn4x Dec 17 11:44:06.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:44:06.816: INFO: rc: 1 Dec 17 11:44:06.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000dbd980 exit status 1 true [0xc001ed0548 0xc001ed0560 0xc001ed0578] [0xc001ed0548 0xc001ed0560 0xc001ed0578] [0xc001ed0558 0xc001ed0570] [0x935700 0x935700] 0xc001283080 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 17 11:44:16.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:44:17.019: INFO: rc: 1 Dec 17 11:44:17.020: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000dbdaa0 exit status 1 true [0xc001ed0580 0xc001ed0598 0xc001ed05b0] [0xc001ed0580 0xc001ed0598 0xc001ed05b0] [0xc001ed0590 0xc001ed05a8] [0x935700 0x935700] 0xc001283e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:44:27.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 
11:44:27.178: INFO: rc: 1 Dec 17 11:44:27.179: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8d9e0 exit status 1 true [0xc00016ad90 0xc00016adc0 0xc00016ade0] [0xc00016ad90 0xc00016adc0 0xc00016ade0] [0xc00016adb0 0xc00016add0] [0x935700 0x935700] 0xc000c83f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:44:37.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:44:37.316: INFO: rc: 1 Dec 17 11:44:37.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8db30 exit status 1 true [0xc00016adf8 0xc00016ae20 0xc00016ae38] [0xc00016adf8 0xc00016ae20 0xc00016ae38] [0xc00016ae18 0xc00016ae30] [0x935700 0x935700] 0xc0012e63c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:44:47.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:44:47.473: INFO: rc: 1 Dec 17 11:44:47.473: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc001044330 exit status 1 true [0xc000304360 0xc000304448 0xc0003044e8] [0xc000304360 0xc000304448 0xc0003044e8] [0xc0003043c8 0xc0003044c8] [0x935700 0x935700] 0xc000bf62a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:44:57.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:44:57.628: INFO: rc: 1 Dec 17 11:44:57.629: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010444b0 exit status 1 true [0xc000304508 0xc0003045b8 0xc0003045e0] [0xc000304508 0xc0003045b8 0xc0003045e0] [0xc000304580 0xc0003045d8] [0x935700 0x935700] 0xc000bf70e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:45:07.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:45:07.801: INFO: rc: 1 Dec 17 11:45:07.802: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147c120 exit status 1 true [0xc000c0c000 0xc000c0c018 0xc000c0c030] [0xc000c0c000 0xc000c0c018 0xc000c0c030] [0xc000c0c010 0xc000c0c028] [0x935700 0x935700] 0xc000c82f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 
11:45:17.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:45:18.056: INFO: rc: 1 Dec 17 11:45:18.057: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000234ff0 exit status 1 true [0xc00016a000 0xc00016a140 0xc00016a190] [0xc00016a000 0xc00016a140 0xc00016a190] [0xc00016a120 0xc00016a188] [0x935700 0x935700] 0xc000e4a6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:45:28.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:45:28.222: INFO: rc: 1 Dec 17 11:45:28.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba120 exit status 1 true [0xc001bfa000 0xc001bfa018 0xc001bfa030] [0xc001bfa000 0xc001bfa018 0xc001bfa030] [0xc001bfa010 0xc001bfa028] [0x935700 0x935700] 0xc0013d2480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:45:38.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:45:38.379: INFO: rc: 1 Dec 17 11:45:38.381: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba210 exit status 1 true [0xc001bfa038 0xc001bfa050 0xc001bfa068] [0xc001bfa038 0xc001bfa050 0xc001bfa068] [0xc001bfa048 0xc001bfa060] [0x935700 0x935700] 0xc0013d2780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:45:48.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:45:48.614: INFO: rc: 1 Dec 17 11:45:48.614: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba330 exit status 1 true [0xc001bfa070 0xc001bfa088 0xc001bfa0a0] [0xc001bfa070 0xc001bfa088 0xc001bfa0a0] [0xc001bfa080 0xc001bfa098] [0x935700 0x935700] 0xc0013d2c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:45:58.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:45:58.865: INFO: rc: 1 Dec 17 11:45:58.865: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147c390 exit status 1 true [0xc000c0c038 0xc000c0c050 0xc000c0c068] 
[0xc000c0c038 0xc000c0c050 0xc000c0c068] [0xc000c0c048 0xc000c0c060] [0x935700 0x935700] 0xc000c83380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:46:08.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:46:09.010: INFO: rc: 1 Dec 17 11:46:09.011: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002351d0 exit status 1 true [0xc00016a198 0xc00016a1f0 0xc00016a238] [0xc00016a198 0xc00016a1f0 0xc00016a238] [0xc00016a1c8 0xc00016a228] [0x935700 0x935700] 0xc000e4b6e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:46:19.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:46:19.114: INFO: rc: 1 Dec 17 11:46:19.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147c4e0 exit status 1 true [0xc000c0c070 0xc000c0c088 0xc000c0c0a0] [0xc000c0c070 0xc000c0c088 0xc000c0c0a0] [0xc000c0c080 0xc000c0c098] [0x935700 0x935700] 0xc000c838c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:46:29.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:46:29.257: INFO: rc: 1 Dec 17 11:46:29.258: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001044630 exit status 1 true [0xc0003045e8 0xc000304658 0xc000304680] [0xc0003045e8 0xc000304658 0xc000304680] [0xc000304640 0xc000304670] [0x935700 0x935700] 0xc000bf7bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:46:39.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:46:39.468: INFO: rc: 1 Dec 17 11:46:39.469: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147cba0 exit status 1 true [0xc000c0c0a8 0xc000c0c0c0 0xc000c0c0d8] [0xc000c0c0a8 0xc000c0c0c0 0xc000c0c0d8] [0xc000c0c0b8 0xc000c0c0d0] [0x935700 0x935700] 0xc000c83ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:46:49.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:46:49.613: INFO: rc: 1 Dec 17 11:46:49.613: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001044360 exit status 1 true [0xc000304360 0xc000304448 0xc0003044e8] [0xc000304360 0xc000304448 0xc0003044e8] [0xc0003043c8 0xc0003044c8] [0x935700 0x935700] 0xc000e4a6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:46:59.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:46:59.754: INFO: rc: 1 Dec 17 11:46:59.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba150 exit status 1 true [0xc000c0c000 0xc000c0c018 0xc000c0c030] [0xc000c0c000 0xc000c0c018 0xc000c0c030] [0xc000c0c010 0xc000c0c028] [0x935700 0x935700] 0xc000bf62a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:47:09.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:47:09.942: INFO: rc: 1 Dec 17 11:47:09.942: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba2a0 exit status 1 true [0xc000c0c038 0xc000c0c050 0xc000c0c068] [0xc000c0c038 0xc000c0c050 0xc000c0c068] [0xc000c0c048 0xc000c0c060] 
[0x935700 0x935700] 0xc000bf70e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:47:19.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:47:20.115: INFO: rc: 1 Dec 17 11:47:20.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000235080 exit status 1 true [0xc001bfa000 0xc001bfa018 0xc001bfa030] [0xc001bfa000 0xc001bfa018 0xc001bfa030] [0xc001bfa010 0xc001bfa028] [0x935700 0x935700] 0xc0013d2480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:47:30.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:47:30.311: INFO: rc: 1 Dec 17 11:47:30.311: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147c150 exit status 1 true [0xc00016a000 0xc00016a140 0xc00016a190] [0xc00016a000 0xc00016a140 0xc00016a190] [0xc00016a120 0xc00016a188] [0x935700 0x935700] 0xc000c82f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:47:40.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Dec 17 11:47:40.514: INFO: rc: 1 Dec 17 11:47:40.515: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba450 exit status 1 true [0xc000c0c070 0xc000c0c088 0xc000c0c0a0] [0xc000c0c070 0xc000c0c088 0xc000c0c0a0] [0xc000c0c080 0xc000c0c098] [0x935700 0x935700] 0xc000bf7bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:47:50.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:47:50.675: INFO: rc: 1 Dec 17 11:47:50.676: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba5a0 exit status 1 true [0xc000c0c0a8 0xc000c0c0c0 0xc000c0c0d8] [0xc000c0c0a8 0xc000c0c0c0 0xc000c0c0d8] [0xc000c0c0b8 0xc000c0c0d0] [0x935700 0x935700] 0xc001cdc480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:48:00.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:48:00.802: INFO: rc: 1 Dec 17 11:48:00.802: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001044540 exit status 1 true [0xc000304508 0xc0003045b8 0xc0003045e0] [0xc000304508 0xc0003045b8 0xc0003045e0] [0xc000304580 0xc0003045d8] [0x935700 0x935700] 0xc000e4b6e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:48:10.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:48:10.999: INFO: rc: 1 Dec 17 11:48:11.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001044690 exit status 1 true [0xc0003045e8 0xc000304658 0xc000304680] [0xc0003045e8 0xc000304658 0xc000304680] [0xc000304640 0xc000304670] [0x935700 0x935700] 0xc0019b6660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:48:21.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:48:21.114: INFO: rc: 1 Dec 17 11:48:21.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147c3c0 exit status 1 true [0xc00016a198 0xc00016a1f0 0xc00016a238] [0xc00016a198 0xc00016a1f0 0xc00016a238] [0xc00016a1c8 0xc00016a228] [0x935700 0x935700] 0xc000c83380 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:48:31.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:48:31.260: INFO: rc: 1 Dec 17 11:48:31.260: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147c540 exit status 1 true [0xc00016a248 0xc00016a288 0xc00016a308] [0xc00016a248 0xc00016a288 0xc00016a308] [0xc00016a278 0xc00016a2e8] [0x935700 0x935700] 0xc000c838c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:48:41.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:48:41.431: INFO: rc: 1 Dec 17 11:48:41.432: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00147ccf0 exit status 1 true [0xc00016a320 0xc00016a358 0xc00016a3f0] [0xc00016a320 0xc00016a358 0xc00016a3f0] [0xc00016a338 0xc00016a3e8] [0x935700 0x935700] 0xc000c83ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:48:51.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:48:51.598: INFO: rc: 1 Dec 17 11:48:51.598: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aba120 exit status 1 true [0xc000c0c008 0xc000c0c020 0xc000c0c038] [0xc000c0c008 0xc000c0c020 0xc000c0c038] [0xc000c0c018 0xc000c0c030] [0x935700 0x935700] 0xc000bf62a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:49:01.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:49:01.750: INFO: rc: 1 Dec 17 11:49:01.750: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001044330 exit status 1 true [0xc0003041e8 0xc0003043c8 0xc0003044c8] [0xc0003041e8 0xc0003043c8 0xc0003044c8] [0xc0003043a8 0xc000304460] [0x935700 0x935700] 0xc000e4a6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 17 11:49:11.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6xn4x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 11:49:11.984: INFO: rc: 1 Dec 17 11:49:11.985: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Dec 17 11:49:11.985: INFO: Scaling statefulset ss to 0 Dec 17 11:49:12.003: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 17 11:49:12.006: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6xn4x
Dec 17 11:49:12.009: INFO: Scaling statefulset ss to 0
Dec 17 11:49:12.019: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 11:49:12.021: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:49:12.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6xn4x" for this suite.
Dec 17 11:49:20.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:49:20.158: INFO: namespace: e2e-tests-statefulset-6xn4x, resource: bindings, ignored listing per whitelist
Dec 17 11:49:20.384: INFO: namespace e2e-tests-statefulset-6xn4x deletion completed in 8.328377658s
• [SLOW TEST:392.683 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:49:20.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 17 11:49:20.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-fc5bs run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 17 11:49:34.413: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 17 11:49:34.414: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:49:37.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fc5bs" for this suite.
Dec 17 11:49:44.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:49:44.413: INFO: namespace: e2e-tests-kubectl-fc5bs, resource: bindings, ignored listing per whitelist
Dec 17 11:49:44.414: INFO: namespace e2e-tests-kubectl-fc5bs deletion completed in 6.281918589s
• [SLOW TEST:24.029 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:49:44.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 11:49:44.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:49:54.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gctv2" for this suite.
Dec 17 11:50:42.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:50:43.066: INFO: namespace: e2e-tests-pods-gctv2, resource: bindings, ignored listing per whitelist
Dec 17 11:50:43.173: INFO: namespace e2e-tests-pods-gctv2 deletion completed in 48.223203222s
• [SLOW TEST:58.759 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:50:43.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7485b65b-20ba-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 11:50:43.642: INFO: Waiting up to 5m0s for pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-x4hcw" to be "success or failure"
Dec 17 11:50:43.736: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 93.650118ms
Dec 17 11:50:45.795: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152096404s
Dec 17 11:50:47.839: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195998948s
Dec 17 11:50:49.870: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227789891s
Dec 17 11:50:51.892: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249883599s
Dec 17 11:50:53.913: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.270118986s
Dec 17 11:50:55.942: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.299314285s
STEP: Saw pod success
Dec 17 11:50:55.942: INFO: Pod "pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:50:55.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 17 11:50:56.309: INFO: Waiting for pod pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:50:56.350: INFO: Pod pod-secrets-74a9b945-20c3-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:50:56.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x4hcw" for this suite.
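The "Waiting up to 5m0s for pod … to be "success or failure"" lines above come from the e2e framework polling a pod's phase roughly every two seconds until it reaches a terminal state or times out. A minimal sketch of that polling loop, with `get_phase`, `clock`, and `sleep` as injectable stand-ins (hypothetical names, not the real framework's API):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, poll_interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports a terminal phase or the timeout expires.

    Mirrors the log above: each check reports the elapsed time, "Succeeded"
    or "Failed" ends the wait, and exceeding the 5m budget is an error.
    """
    start = clock()
    while True:
        phase = get_phase()  # e.g. "Pending", "Succeeded", "Failed"
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(poll_interval)

# Simulated pod: Pending for five polls, then Succeeded (no cluster involved).
phases = iter(["Pending"] * 5 + ["Succeeded"])
phase, _ = wait_for_pod_completion(lambda: next(phases), sleep=lambda _: None)
# phase == "Succeeded"
```

Injecting the clock and sleep keeps the loop deterministic in tests, which is why the timeout path can be exercised without waiting five minutes.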
Dec 17 11:51:02.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:51:02.565: INFO: namespace: e2e-tests-secrets-x4hcw, resource: bindings, ignored listing per whitelist
Dec 17 11:51:02.844: INFO: namespace e2e-tests-secrets-x4hcw deletion completed in 6.433289869s
STEP: Destroying namespace "e2e-tests-secret-namespace-c8kl8" for this suite.
Dec 17 11:51:08.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:51:09.192: INFO: namespace: e2e-tests-secret-namespace-c8kl8, resource: bindings, ignored listing per whitelist
Dec 17 11:51:09.259: INFO: namespace e2e-tests-secret-namespace-c8kl8 deletion completed in 6.414970615s
• [SLOW TEST:26.086 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:51:09.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 11:51:09.442: INFO: Creating ReplicaSet my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004
Dec 17 11:51:09.508: INFO: Pod name my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004: Found 0 pods out of 1
Dec 17 11:51:14.540: INFO: Pod name my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004: Found 1 pods out of 1
Dec 17 11:51:14.540: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004" is running
Dec 17 11:51:20.633: INFO: Pod "my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004-t5qv4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 11:51:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 11:51:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 11:51:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 11:51:09 +0000 UTC Reason: Message:}])
Dec 17 11:51:20.633: INFO: Trying to dial the pod
Dec 17 11:51:25.682: INFO: Controller my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004: Got expected result from replica 1 [my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004-t5qv4]: "my-hostname-basic-840da910-20c3-11ea-a5ef-0242ac110004-t5qv4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:51:25.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-6lf6m" for this suite.
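The ReplicaSet run above follows a fixed sequence: wait until the controller has created the expected pod count, then dial each replica and require that it answers with its own pod name (the behavior of the serve-hostname image). A small sketch of that verification logic, where `list_pods` and `dial` are hypothetical stand-ins for the Kubernetes API calls, not the real e2e helpers:

```python
def verify_replicaset(expected_replicas, list_pods, dial):
    """Confirm pod count, then require each replica to report its own name."""
    pods = list_pods()
    if len(pods) != expected_replicas:
        raise AssertionError(f"found {len(pods)} pods, want {expected_replicas}")
    successes = 0
    for pod in pods:
        answer = dial(pod)  # serve-hostname replies with the pod's name
        if answer != pod:
            raise AssertionError(f"replica {pod} answered {answer!r}")
        successes += 1  # "N of M required successes so far", as in the log
    return successes

# One healthy replica that reports its own name, as in the run above
# (generic names; no cluster involved).
successes = verify_replicaset(1, lambda: ["replica-0"], lambda pod: pod)
```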
Dec 17 11:51:31.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:51:31.872: INFO: namespace: e2e-tests-replicaset-6lf6m, resource: bindings, ignored listing per whitelist
Dec 17 11:51:31.910: INFO: namespace e2e-tests-replicaset-6lf6m deletion completed in 6.213305268s
• [SLOW TEST:22.651 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:51:31.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 17 11:51:32.274: INFO: Waiting up to 5m0s for pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-zwbxz" to be "success or failure"
Dec 17 11:51:32.287: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.617116ms
Dec 17 11:51:34.347: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073028217s
Dec 17 11:51:36.887: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612547259s
Dec 17 11:51:39.673: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.398389648s
Dec 17 11:51:41.736: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.462136339s
Dec 17 11:51:43.771: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.497131533s
Dec 17 11:51:45.795: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.521005687s
STEP: Saw pod success
Dec 17 11:51:45.795: INFO: Pod "downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:51:45.804: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004 container dapi-container:
STEP: delete the pod
Dec 17 11:51:46.060: INFO: Waiting for pod downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:51:46.084: INFO: Pod downward-api-91a6e6b4-20c3-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:51:46.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zwbxz" for this suite.
Dec 17 11:51:52.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:51:52.321: INFO: namespace: e2e-tests-downward-api-zwbxz, resource: bindings, ignored listing per whitelist
Dec 17 11:51:52.376: INFO: namespace e2e-tests-downward-api-zwbxz deletion completed in 6.274052928s
• [SLOW TEST:20.465 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:51:52.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gkdqk
Dec 17 11:52:02.832: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gkdqk
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 11:52:02.836: INFO: Initial restart count of pod liveness-http is 0
Dec 17 11:52:31.633: INFO: Restart count of pod e2e-tests-container-probe-gkdqk/liveness-http is now 1 (28.796768694s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:52:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gkdqk" for this suite.
Dec 17 11:52:37.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:52:38.031: INFO: namespace: e2e-tests-container-probe-gkdqk, resource: bindings, ignored listing per whitelist
Dec 17 11:52:38.244: INFO: namespace e2e-tests-container-probe-gkdqk deletion completed in 6.38317279s
• [SLOW TEST:45.867 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:52:38.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:52:51.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2bfhk" for this suite.
Dec 17 11:52:57.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 11:52:57.368: INFO: namespace: e2e-tests-emptydir-wrapper-2bfhk, resource: bindings, ignored listing per whitelist
Dec 17 11:52:57.450: INFO: namespace e2e-tests-emptydir-wrapper-2bfhk deletion completed in 6.274559955s
• [SLOW TEST:19.206 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 11:52:57.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-c4997110-20c3-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 11:52:57.769: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-tv6hl" to be "success or failure"
Dec 17 11:52:57.779: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.501784ms
Dec 17 11:53:00.232: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463751022s
Dec 17 11:53:02.259: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490220981s
Dec 17 11:53:04.334: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565090614s
Dec 17 11:53:06.354: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585400357s
Dec 17 11:53:08.373: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.604218527s
STEP: Saw pod success
Dec 17 11:53:08.373: INFO: Pod "pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 11:53:08.380: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 17 11:53:08.575: INFO: Waiting for pod pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004 to disappear
Dec 17 11:53:08.587: INFO: Pod pod-projected-secrets-c49cd170-20c3-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 11:53:08.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tv6hl" for this suite.
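The [k8s.io] Probing container test earlier in this log (liveness-http) expects the kubelet to restart a container once its /healthz probe fails enough times in a row, which is why restartCount climbs from 0 to 1. A toy model of that consecutive-failure accounting (the `failure_threshold` default of 3 matches Kubernetes' probe default, but the restart bookkeeping here is a simplified sketch, not real kubelet code):

```python
def run_probes(results, failure_threshold=3):
    """Count restarts for a sequence of probe results (True = healthy).

    The container is restarted after `failure_threshold` consecutive
    failures; the consecutive-failure counter resets on any success
    and after each restart.
    """
    restarts = 0
    consecutive_failures = 0
    for healthy in results:
        if healthy:
            consecutive_failures = 0
            continue
        consecutive_failures += 1
        if consecutive_failures >= failure_threshold:
            restarts += 1
            consecutive_failures = 0
    return restarts

# Healthy at first, then /healthz starts failing: one restart, as in the log.
restart_count = run_probes([True, True, False, False, False, True])
# restart_count == 1
```

The reset-on-success rule is why intermittent failures below the threshold never trigger a restart, which the readiness-probe test later in this log relies on ("never restart").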
Dec 17 11:53:14.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:53:14.785: INFO: namespace: e2e-tests-projected-tv6hl, resource: bindings, ignored listing per whitelist Dec 17 11:53:14.824: INFO: namespace e2e-tests-projected-tv6hl deletion completed in 6.227443393s • [SLOW TEST:17.374 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:53:14.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 17 11:53:45.142: INFO: Container started at 2019-12-17 11:53:22 +0000 UTC, pod became ready at 2019-12-17 11:53:44 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 
17 11:53:45.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-vxkdg" for this suite. Dec 17 11:54:09.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:54:09.339: INFO: namespace: e2e-tests-container-probe-vxkdg, resource: bindings, ignored listing per whitelist Dec 17 11:54:09.423: INFO: namespace e2e-tests-container-probe-vxkdg deletion completed in 24.26864674s • [SLOW TEST:54.598 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:54:09.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 17 11:54:09.651: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119168,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 17 11:54:09.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119168,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 17 11:54:19.681: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119180,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 17 11:54:19.682: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119180,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 17 11:54:29.713: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119193,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 17 11:54:29.713: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119193,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 17 11:54:39.741: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119206,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 17 11:54:39.741: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-a,UID:ef76e402-20c3-11ea-a994-fa163e34d433,ResourceVersion:15119206,Generation:0,CreationTimestamp:2019-12-17 11:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 17 11:54:49.847: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-b,UID:0760a4a4-20c4-11ea-a994-fa163e34d433,ResourceVersion:15119219,Generation:0,CreationTimestamp:2019-12-17 11:54:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 17 11:54:49.848: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-b,UID:0760a4a4-20c4-11ea-a994-fa163e34d433,ResourceVersion:15119219,Generation:0,CreationTimestamp:2019-12-17 11:54:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 17 11:54:59.942: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-b,UID:0760a4a4-20c4-11ea-a994-fa163e34d433,ResourceVersion:15119231,Generation:0,CreationTimestamp:2019-12-17 11:54:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 17 11:54:59.942: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-57px7,SelfLink:/api/v1/namespaces/e2e-tests-watch-57px7/configmaps/e2e-watch-test-configmap-b,UID:0760a4a4-20c4-11ea-a994-fa163e34d433,ResourceVersion:15119231,Generation:0,CreationTimestamp:2019-12-17 11:54:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:55:09.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-57px7" for this suite. 
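The event sequence above (MODIFIED at mutation 1 and 2, then DELETED for configmap A; ADDED and DELETED for configmap B, with A's initial ADDED preceding this excerpt) is what the Watchers test asserts: each label-selected watcher must observe exactly its own configmap's lifecycle, in order. A minimal illustrative sketch of that bookkeeping, with event tuples taken from the log and everything else invented:

```python
# Illustrative reconstruction of the Watchers conformance check:
# the watcher selecting label A must see A's full lifecycle in order,
# while the watcher selecting label B sees only B's ADDED/DELETED.
# Tuples are (event_type, mutation_counter) as printed in the log.

def check_watch_stream(observed, expected):
    """True iff the watcher saw exactly the expected events, in order."""
    return observed == expected

watcher_a = [("ADDED", 0), ("MODIFIED", 1), ("MODIFIED", 2), ("DELETED", 2)]
watcher_b = [("ADDED", 0), ("DELETED", 0)]

assert check_watch_stream(
    watcher_a,
    [("ADDED", 0), ("MODIFIED", 1), ("MODIFIED", 2), ("DELETED", 2)],
)
assert check_watch_stream(watcher_b, [("ADDED", 0), ("DELETED", 0)])
# a B-selected watcher must not have received A's events
assert not check_watch_stream(watcher_b, watcher_a)
```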
Dec 17 11:55:16.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:55:16.161: INFO: namespace: e2e-tests-watch-57px7, resource: bindings, ignored listing per whitelist Dec 17 11:55:16.177: INFO: namespace e2e-tests-watch-57px7 deletion completed in 6.21508315s • [SLOW TEST:66.754 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:55:16.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Dec 17 11:55:26.815: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-175e4af9-20c4-11ea-a5ef-0242ac110004", GenerateName:"", Namespace:"e2e-tests-pods-j8wtv", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-j8wtv/pods/pod-submit-remove-175e4af9-20c4-11ea-a5ef-0242ac110004", UID:"1764342c-20c4-11ea-a994-fa163e34d433", ResourceVersion:"15119279", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712180516, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"595611825"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-w252w", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b09840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w252w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b4b698), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0015d7320), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b4b6d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b4b6f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b4b6f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b4b6fc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712180516, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712180526, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712180526, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712180516, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00144b000), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00144b0a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"docker://0cd6406472e8101a73367106fcc4fbbef4a7078b25324dbb5f4b24a950eb442c"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:55:42.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-j8wtv" for this suite. Dec 17 11:55:48.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:55:48.950: INFO: namespace: e2e-tests-pods-j8wtv, resource: bindings, ignored listing per whitelist Dec 17 11:55:49.106: INFO: namespace e2e-tests-pods-j8wtv deletion completed in 6.325400832s • [SLOW TEST:32.929 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:55:49.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:55:49.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-nlgx4" for this suite. Dec 17 11:55:55.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:55:55.722: INFO: namespace: e2e-tests-services-nlgx4, resource: bindings, ignored listing per whitelist Dec 17 11:55:55.817: INFO: namespace e2e-tests-services-nlgx4 deletion completed in 6.22099587s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.710 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:55:55.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with 
name cm-test-opt-del-2ee8f986-20c4-11ea-a5ef-0242ac110004 STEP: Creating configMap with name cm-test-opt-upd-2ee8fa14-20c4-11ea-a5ef-0242ac110004 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2ee8f986-20c4-11ea-a5ef-0242ac110004 STEP: Updating configmap cm-test-opt-upd-2ee8fa14-20c4-11ea-a5ef-0242ac110004 STEP: Creating configMap with name cm-test-opt-create-2ee8fa42-20c4-11ea-a5ef-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:57:42.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gzxmh" for this suite. Dec 17 11:58:08.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:58:08.697: INFO: namespace: e2e-tests-projected-gzxmh, resource: bindings, ignored listing per whitelist Dec 17 11:58:08.819: INFO: namespace e2e-tests-projected-gzxmh deletion completed in 26.322576919s • [SLOW TEST:133.001 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:58:08.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Dec 17 11:58:09.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwpkw' Dec 17 11:58:09.481: INFO: stderr: "" Dec 17 11:58:09.481: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Dec 17 11:58:10.532: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:10.533: INFO: Found 0 / 1 Dec 17 11:58:11.913: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:11.913: INFO: Found 0 / 1 Dec 17 11:58:12.556: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:12.556: INFO: Found 0 / 1 Dec 17 11:58:13.495: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:13.495: INFO: Found 0 / 1 Dec 17 11:58:14.508: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:14.508: INFO: Found 0 / 1 Dec 17 11:58:16.179: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:16.179: INFO: Found 0 / 1 Dec 17 11:58:16.845: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:16.845: INFO: Found 0 / 1 Dec 17 11:58:17.511: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:17.511: INFO: Found 0 / 1 Dec 17 11:58:18.504: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:18.504: INFO: Found 0 / 1 Dec 17 11:58:19.499: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:19.500: INFO: Found 1 / 1 Dec 17 11:58:19.500: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Dec 17 11:58:19.506: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:19.506: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 17 11:58:19.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-klkvl --namespace=e2e-tests-kubectl-nwpkw -p {"metadata":{"annotations":{"x":"y"}}}' Dec 17 11:58:19.823: INFO: stderr: "" Dec 17 11:58:19.824: INFO: stdout: "pod/redis-master-klkvl patched\n" STEP: checking annotations Dec 17 11:58:19.919: INFO: Selector matched 1 pods for map[app:redis] Dec 17 11:58:19.919: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:58:19.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nwpkw" for this suite. Dec 17 11:58:43.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:58:44.320: INFO: namespace: e2e-tests-kubectl-nwpkw, resource: bindings, ignored listing per whitelist Dec 17 11:58:44.354: INFO: namespace e2e-tests-kubectl-nwpkw deletion completed in 24.427536852s • [SLOW TEST:35.535 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:58:44.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:58:51.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-gdd2v" for this suite. Dec 17 11:58:57.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:58:57.466: INFO: namespace: e2e-tests-namespaces-gdd2v, resource: bindings, ignored listing per whitelist Dec 17 11:58:57.477: INFO: namespace e2e-tests-namespaces-gdd2v deletion completed in 6.277812315s STEP: Destroying namespace "e2e-tests-nsdeletetest-rqq9h" for this suite. Dec 17 11:58:57.481: INFO: Namespace e2e-tests-nsdeletetest-rqq9h was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-sxlpc" for this suite. 
Dec 17 11:59:05.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:59:05.684: INFO: namespace: e2e-tests-nsdeletetest-sxlpc, resource: bindings, ignored listing per whitelist Dec 17 11:59:05.820: INFO: namespace e2e-tests-nsdeletetest-sxlpc deletion completed in 8.338856833s • [SLOW TEST:21.467 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:59:05.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 17 11:59:06.094: INFO: Waiting up to 5m0s for pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-jh6hp" to be "success or failure" Dec 17 11:59:06.111: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.786296ms Dec 17 11:59:08.331: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23707044s Dec 17 11:59:10.358: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263990663s Dec 17 11:59:12.392: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29855991s Dec 17 11:59:14.928: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834474646s Dec 17 11:59:16.956: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.862090805s STEP: Saw pod success Dec 17 11:59:16.956: INFO: Pod "downward-api-a0264373-20c4-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 11:59:16.967: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-a0264373-20c4-11ea-a5ef-0242ac110004 container dapi-container: STEP: delete the pod Dec 17 11:59:17.345: INFO: Waiting for pod downward-api-a0264373-20c4-11ea-a5ef-0242ac110004 to disappear Dec 17 11:59:17.654: INFO: Pod downward-api-a0264373-20c4-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:59:17.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jh6hp" for this suite. 
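The Downward API test above injects `limits.cpu` and `limits.memory` into container env vars via `resourceFieldRef`; because the container declares no limits of its own, the kubelet defaults the values to the node's allocatable resources. A hypothetical pod-spec fragment, written as the dict a client would submit (container and env var names are illustrative, not the test's actual manifest):

```python
# Hypothetical pod spec fragment for a Downward API env test:
# resourceFieldRef env vars with no resources.limits declared, so the
# projected values fall back to node allocatable. Names are invented.
pod_spec = {
    "containers": [{
        "name": "dapi-container",
        "image": "busybox",
        "command": ["sh", "-c", "env"],
        "env": [
            {"name": "CPU_LIMIT",
             "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
            {"name": "MEMORY_LIMIT",
             "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
        ],
        # no "resources" block: this is what triggers the
        # node-allocatable default the test verifies
    }],
    "restartPolicy": "Never",
}

refs = [e["valueFrom"]["resourceFieldRef"]["resource"]
        for e in pod_spec["containers"][0]["env"]]
assert refs == ["limits.cpu", "limits.memory"]
```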
Dec 17 11:59:25.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:59:25.778: INFO: namespace: e2e-tests-downward-api-jh6hp, resource: bindings, ignored listing per whitelist Dec 17 11:59:26.054: INFO: namespace e2e-tests-downward-api-jh6hp deletion completed in 8.384621714s • [SLOW TEST:20.232 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:59:26.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 17 11:59:26.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 17 11:59:26.481: INFO: stderr: "" Dec 17 11:59:26.481: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", 
Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 11:59:26.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-flf7w" for this suite. Dec 17 11:59:32.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 11:59:32.657: INFO: namespace: e2e-tests-kubectl-flf7w, resource: bindings, ignored listing per whitelist Dec 17 11:59:32.716: INFO: namespace e2e-tests-kubectl-flf7w deletion completed in 6.213162877s • [SLOW TEST:6.662 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 11:59:32.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in 
namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zh9xg STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 17 11:59:33.036: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 17 12:00:17.662: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-zh9xg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 12:00:17.662: INFO: >>> kubeConfig: /root/.kube/config Dec 17 12:00:18.235: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:00:18.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-zh9xg" for this suite. 
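The intra-pod check above has the host test container curl a probe pod's `/dial` endpoint, which in turn dials the target pod. Building that query string can be sketched with the standard library; the endpoint and parameters are exactly those visible in the `ExecWithOptions` log line, while the helper name is invented:

```python
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, port=8080, protocol="http", tries=1):
    """Construct the /dial URL the networking test curls from the
    hostexec container (hypothetical helper; parameters mirror the
    ExecWithOptions log record above)."""
    query = urlencode({
        "request": "hostName",   # ask the target to report its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{probe_ip}:{port}/dial?{query}"

url = dial_url("10.32.0.5", "10.32.0.4")
assert url.startswith("http://10.32.0.5:8080/dial?")
assert "host=10.32.0.4" in url and "request=hostName" in url
```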
Dec 17 12:00:46.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:00:46.505: INFO: namespace: e2e-tests-pod-network-test-zh9xg, resource: bindings, ignored listing per whitelist Dec 17 12:00:46.650: INFO: namespace e2e-tests-pod-network-test-zh9xg deletion completed in 28.39595589s • [SLOW TEST:73.934 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:00:46.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 17 12:00:47.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-qzlhj" to be "success or 
failure" Dec 17 12:00:47.039: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.99712ms Dec 17 12:00:49.054: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026930663s Dec 17 12:00:51.076: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04928314s Dec 17 12:00:53.476: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448969623s Dec 17 12:00:55.492: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.465167027s Dec 17 12:00:57.512: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.484827269s STEP: Saw pod success Dec 17 12:00:57.512: INFO: Pod "downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 12:00:57.518: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004 container client-container: STEP: delete the pod Dec 17 12:00:57.747: INFO: Waiting for pod downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004 to disappear Dec 17 12:00:57.760: INFO: Pod downwardapi-volume-dc479f16-20c4-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:00:57.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qzlhj" for this suite. 
Dec 17 12:01:05.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:01:06.133: INFO: namespace: e2e-tests-downward-api-qzlhj, resource: bindings, ignored listing per whitelist
Dec 17 12:01:06.225: INFO: namespace e2e-tests-downward-api-qzlhj deletion completed in 8.457243178s

• [SLOW TEST:19.574 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:01:06.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e7da687b-20c4-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:01:06.471: INFO: Waiting up to 5m0s for pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-pb6dz" to be "success or failure"
Dec 17 12:01:06.491: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.980444ms
Dec 17 12:01:08.592: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120070564s
Dec 17 12:01:10.627: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155260527s
Dec 17 12:01:13.775: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.303757035s
Dec 17 12:01:15.784: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.312596038s
Dec 17 12:01:17.818: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.346915177s
STEP: Saw pod success
Dec 17 12:01:17.819: INFO: Pod "pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:01:17.883: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004 container secret-env-test:
STEP: delete the pod
Dec 17 12:01:18.132: INFO: Waiting for pod pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:01:18.157: INFO: Pod pod-secrets-e7db9ef9-20c4-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:01:18.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pb6dz" for this suite.
Dec 17 12:01:24.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:01:24.588: INFO: namespace: e2e-tests-secrets-pb6dz, resource: bindings, ignored listing per whitelist
Dec 17 12:01:24.628: INFO: namespace e2e-tests-secrets-pb6dz deletion completed in 6.411950875s

• [SLOW TEST:18.403 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:01:24.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 17 12:01:24.806: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix301858108/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:01:24.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dlzlz" for this suite.
Dec 17 12:01:30.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:01:31.135: INFO: namespace: e2e-tests-kubectl-dlzlz, resource: bindings, ignored listing per whitelist
Dec 17 12:01:31.154: INFO: namespace e2e-tests-kubectl-dlzlz deletion completed in 6.201680002s

• [SLOW TEST:6.526 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:01:31.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f6b4b69b-20c4-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:01:31.345: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-t6d7k" to be "success or failure"
Dec 17 12:01:31.360: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.900074ms
Dec 17 12:01:33.375: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030021645s
Dec 17 12:01:35.387: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042261593s
Dec 17 12:01:37.804: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459229091s
Dec 17 12:01:39.820: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.475331103s
Dec 17 12:01:41.845: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.500805132s
Dec 17 12:01:43.864: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.51921961s
STEP: Saw pod success
Dec 17 12:01:43.864: INFO: Pod "pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:01:43.882: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 17 12:01:44.270: INFO: Waiting for pod pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:01:44.336: INFO: Pod pod-projected-secrets-f6b54d0b-20c4-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:01:44.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t6d7k" for this suite.
Dec 17 12:01:50.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:01:50.750: INFO: namespace: e2e-tests-projected-t6d7k, resource: bindings, ignored listing per whitelist
Dec 17 12:01:50.758: INFO: namespace e2e-tests-projected-t6d7k deletion completed in 6.261816304s

• [SLOW TEST:19.603 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:01:50.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-02699f24-20c5-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:01:50.952: INFO: Waiting up to 5m0s for pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-wmpsc" to be "success or failure"
Dec 17 12:01:51.052: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 99.362644ms
Dec 17 12:01:53.567: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.614643688s
Dec 17 12:01:55.586: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.634144267s
Dec 17 12:01:57.709: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756837131s
Dec 17 12:01:59.777: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82506045s
Dec 17 12:02:01.800: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.847974516s
STEP: Saw pod success
Dec 17 12:02:01.800: INFO: Pod "pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:02:01.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 17 12:02:02.975: INFO: Waiting for pod pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:02:03.493: INFO: Pod pod-secrets-026ab0a6-20c5-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:02:03.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wmpsc" for this suite.
Dec 17 12:02:09.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:02:10.006: INFO: namespace: e2e-tests-secrets-wmpsc, resource: bindings, ignored listing per whitelist
Dec 17 12:02:10.006: INFO: namespace e2e-tests-secrets-wmpsc deletion completed in 6.484926855s

• [SLOW TEST:19.248 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:02:10.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 12:02:10.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-z4ftk" to be "success or failure"
Dec 17 12:02:10.249: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.779954ms
Dec 17 12:02:12.325: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08805236s
Dec 17 12:02:14.347: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109502543s
Dec 17 12:02:16.791: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554431356s
Dec 17 12:02:18.809: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.572164389s
Dec 17 12:02:20.830: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.593105919s
STEP: Saw pod success
Dec 17 12:02:20.830: INFO: Pod "downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:02:20.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004 container client-container:
STEP: delete the pod
Dec 17 12:02:21.096: INFO: Waiting for pod downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:02:21.322: INFO: Pod downwardapi-volume-0de86a2a-20c5-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:02:21.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z4ftk" for this suite.
Dec 17 12:02:29.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:02:29.628: INFO: namespace: e2e-tests-projected-z4ftk, resource: bindings, ignored listing per whitelist
Dec 17 12:02:29.649: INFO: namespace e2e-tests-projected-z4ftk deletion completed in 8.309045435s

• [SLOW TEST:19.642 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:02:29.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-199e7094-20c5-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:02:29.899: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-zcmn9" to be "success or failure"
Dec 17 12:02:30.045: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 145.761614ms
Dec 17 12:02:32.062: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162731108s
Dec 17 12:02:34.087: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187652518s
Dec 17 12:02:36.445: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.545680968s
Dec 17 12:02:38.460: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561075197s
Dec 17 12:02:40.495: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.596478275s
STEP: Saw pod success
Dec 17 12:02:40.496: INFO: Pod "pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:02:40.509: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004 container projected-configmap-volume-test:
STEP: delete the pod
Dec 17 12:02:40.796: INFO: Waiting for pod pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:02:40.956: INFO: Pod pod-projected-configmaps-19a03b12-20c5-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:02:40.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zcmn9" for this suite.
Dec 17 12:02:47.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:02:47.124: INFO: namespace: e2e-tests-projected-zcmn9, resource: bindings, ignored listing per whitelist
Dec 17 12:02:47.200: INFO: namespace e2e-tests-projected-zcmn9 deletion completed in 6.227213657s

• [SLOW TEST:17.551 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:02:47.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 17 12:02:47.410: INFO: namespace e2e-tests-kubectl-fvt7x
Dec 17 12:02:47.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fvt7x'
Dec 17 12:02:49.785: INFO: stderr: ""
Dec 17 12:02:49.785: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 17 12:02:50.803: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:50.803: INFO: Found 0 / 1
Dec 17 12:02:51.822: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:51.822: INFO: Found 0 / 1
Dec 17 12:02:52.799: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:52.799: INFO: Found 0 / 1
Dec 17 12:02:53.816: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:53.816: INFO: Found 0 / 1
Dec 17 12:02:54.802: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:54.802: INFO: Found 0 / 1
Dec 17 12:02:56.123: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:56.123: INFO: Found 0 / 1
Dec 17 12:02:56.798: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:56.799: INFO: Found 0 / 1
Dec 17 12:02:57.806: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:57.806: INFO: Found 0 / 1
Dec 17 12:02:58.806: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:58.806: INFO: Found 0 / 1
Dec 17 12:02:59.809: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:59.809: INFO: Found 1 / 1
Dec 17 12:02:59.809: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 17 12:02:59.826: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:02:59.826: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 17 12:02:59.826: INFO: wait on redis-master startup in e2e-tests-kubectl-fvt7x
Dec 17 12:02:59.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qmf6m redis-master --namespace=e2e-tests-kubectl-fvt7x'
Dec 17 12:03:00.123: INFO: stderr: ""
Dec 17 12:03:00.124: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Dec 12:02:57.720 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 12:02:57.720 # Server started, Redis version 3.2.12\n1:M 17 Dec 12:02:57.720 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Dec 12:02:57.720 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 17 12:03:00.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-fvt7x'
Dec 17 12:03:00.562: INFO: stderr: ""
Dec 17 12:03:00.562: INFO: stdout: "service/rm2 exposed\n"
Dec 17 12:03:00.582: INFO: Service rm2 in namespace e2e-tests-kubectl-fvt7x found.
STEP: exposing service
Dec 17 12:03:02.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-fvt7x'
Dec 17 12:03:02.963: INFO: stderr: ""
Dec 17 12:03:02.964: INFO: stdout: "service/rm3 exposed\n"
Dec 17 12:03:03.020: INFO: Service rm3 in namespace e2e-tests-kubectl-fvt7x found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:03:05.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fvt7x" for this suite.
Dec 17 12:03:33.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:03:33.681: INFO: namespace: e2e-tests-kubectl-fvt7x, resource: bindings, ignored listing per whitelist
Dec 17 12:03:33.867: INFO: namespace e2e-tests-kubectl-fvt7x deletion completed in 28.718605976s

• [SLOW TEST:46.666 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:03:33.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-p99x8
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-p99x8
STEP: Deleting pre-stop pod
Dec 17 12:04:01.424: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:04:01.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-p99x8" for this suite.
Dec 17 12:04:43.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:04:43.648: INFO: namespace: e2e-tests-prestop-p99x8, resource: bindings, ignored listing per whitelist Dec 17 12:04:43.690: INFO: namespace e2e-tests-prestop-p99x8 deletion completed in 42.227080859s • [SLOW TEST:69.821 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:04:43.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Dec 17 12:04:43.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:44.538: INFO: stderr: "" Dec 17 12:04:44.539: INFO: stdout: "pod/pause created\n" Dec 17 12:04:44.539: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 17 12:04:44.539: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-d997j" to be 
"running and ready" Dec 17 12:04:44.664: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 125.365801ms Dec 17 12:04:46.877: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338010568s Dec 17 12:04:48.897: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357956614s Dec 17 12:04:50.959: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420561362s Dec 17 12:04:52.968: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429496504s Dec 17 12:04:54.980: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.440587962s Dec 17 12:04:54.980: INFO: Pod "pause" satisfied condition "running and ready" Dec 17 12:04:54.980: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Dec 17 12:04:54.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:55.267: INFO: stderr: "" Dec 17 12:04:55.267: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 17 12:04:55.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:55.418: INFO: stderr: "" Dec 17 12:04:55.418: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 17 12:04:55.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:55.565: INFO: 
stderr: "" Dec 17 12:04:55.565: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 17 12:04:55.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:55.707: INFO: stderr: "" Dec 17 12:04:55.707: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Dec 17 12:04:55.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:55.897: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 17 12:04:55.897: INFO: stdout: "pod \"pause\" force deleted\n" Dec 17 12:04:55.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-d997j' Dec 17 12:04:56.165: INFO: stderr: "No resources found.\n" Dec 17 12:04:56.165: INFO: stdout: "" Dec 17 12:04:56.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-d997j -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 17 12:04:56.329: INFO: stderr: "" Dec 17 12:04:56.329: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:04:56.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d997j" for this suite. 
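The cleanup step above runs `kubectl get pods -o go-template=...` and treats empty output as proof that no pod without a `deletionTimestamp` remains. The same filter can be sketched in Python over a hypothetical pod list (the structure mirrors the API objects in the go-template; the sample data is made up):

```python
# Sketch of the go-template filter used in the cleanup step: emit the
# name of every pod that has no metadata.deletionTimestamp set.
def surviving_pods(pod_list: dict) -> list:
    return [
        item["metadata"]["name"]
        for item in pod_list.get("items", [])
        if not item["metadata"].get("deletionTimestamp")
    ]

# Hypothetical list: one pod already marked for deletion, one not.
pods = {"items": [
    {"metadata": {"name": "pause",
                  "deletionTimestamp": "2019-12-17T12:04:55Z"}},
    {"metadata": {"name": "lingering"}},
]}

print(surviving_pods(pods))  # only "lingering" lacks a deletionTimestamp
```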
Dec 17 12:05:03.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:05:03.117: INFO: namespace: e2e-tests-kubectl-d997j, resource: bindings, ignored listing per whitelist Dec 17 12:05:03.367: INFO: namespace e2e-tests-kubectl-d997j deletion completed in 7.026128848s • [SLOW TEST:19.676 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:05:03.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Dec 17 12:05:16.357: INFO: Successfully updated pod "labelsupdate754e0870-20c5-11ea-a5ef-0242ac110004" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 
17 12:05:18.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rqpgh" for this suite. Dec 17 12:05:42.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:05:43.067: INFO: namespace: e2e-tests-downward-api-rqpgh, resource: bindings, ignored listing per whitelist Dec 17 12:05:43.076: INFO: namespace e2e-tests-downward-api-rqpgh deletion completed in 24.523449067s • [SLOW TEST:39.708 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:05:43.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Dec 17 12:05:43.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Dec 17 12:05:43.604: INFO: stderr: "" Dec 17 12:05:43.604: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:05:43.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hjjd4" for this suite. Dec 17 12:05:49.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:05:49.879: INFO: namespace: e2e-tests-kubectl-hjjd4, resource: bindings, ignored listing per whitelist Dec 17 12:05:49.896: INFO: namespace e2e-tests-kubectl-hjjd4 deletion completed in 6.268939641s • [SLOW TEST:6.821 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:05:49.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:05:50.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-5ztrm" for this suite. 
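The repeated `Phase="Pending" ... Elapsed: ...` lines throughout this log come from the framework polling a pod's phase until it reaches a terminal state or the 5m0s timeout expires. A minimal Python sketch of that polling pattern (the `get_phase` callable stands in for an API call; the interval and timeout defaults are assumptions, not the framework's values):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a wanted phase or timeout elapses.

    Each attempt logs the current phase and the elapsed time, echoing the
    'Phase=... Elapsed: ...' lines seen in this log.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase={phase!r} Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f'pod still {phase!r} after {elapsed:.1f}s')
        time.sleep(interval)

# Simulated pod that stays Pending for a few polls, then succeeds.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0.01))
```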
Dec 17 12:05:56.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:05:56.640: INFO: namespace: e2e-tests-kubelet-test-5ztrm, resource: bindings, ignored listing per whitelist Dec 17 12:05:56.712: INFO: namespace e2e-tests-kubelet-test-5ztrm deletion completed in 6.276529731s • [SLOW TEST:6.814 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:05:56.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 17 12:05:56.974: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-dj9f9" to be "success or failure" Dec 17 12:05:56.992: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.244271ms Dec 17 12:05:59.123: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148797165s Dec 17 12:06:01.206: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232465638s Dec 17 12:06:03.223: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249562845s Dec 17 12:06:05.243: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269520347s Dec 17 12:06:07.288: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.314653184s Dec 17 12:06:09.467: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.493126598s STEP: Saw pod success Dec 17 12:06:09.467: INFO: Pod "downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 12:06:09.493: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004 container client-container: STEP: delete the pod Dec 17 12:06:09.981: INFO: Waiting for pod downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004 to disappear Dec 17 12:06:09.996: INFO: Pod downwardapi-volume-95095e3c-20c5-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:06:09.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dj9f9" for this suite. Dec 17 12:06:16.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:06:16.123: INFO: namespace: e2e-tests-downward-api-dj9f9, resource: bindings, ignored listing per whitelist Dec 17 12:06:16.327: INFO: namespace e2e-tests-downward-api-dj9f9 deletion completed in 6.322319735s • [SLOW TEST:19.615 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:06:16.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Dec 17 12:06:16.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 17 12:06:16.634: INFO: Waiting for terminating namespaces to be deleted... Dec 17 12:06:16.637: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Dec 17 12:06:16.651: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 17 12:06:16.651: INFO: Container coredns ready: true, restart count 0 Dec 17 12:06:16.651: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 17 12:06:16.651: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 17 12:06:16.651: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 17 12:06:16.651: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Dec 17 12:06:16.651: INFO: Container coredns ready: true, restart count 0 Dec 17 12:06:16.651: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Dec 17 12:06:16.651: INFO: Container kube-proxy ready: true, restart count 0 Dec 17 12:06:16.651: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Dec 17 12:06:16.651: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 
08:33:23 +0000 UTC (2 container statuses recorded) Dec 17 12:06:16.651: INFO: Container weave ready: true, restart count 0 Dec 17 12:06:16.651: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e127233547c2fe], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:06:17.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-zv4vh" for this suite. Dec 17 12:06:23.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:06:24.015: INFO: namespace: e2e-tests-sched-pred-zv4vh, resource: bindings, ignored listing per whitelist Dec 17 12:06:24.066: INFO: namespace e2e-tests-sched-pred-zv4vh deletion completed in 6.274422694s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.738 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:06:24.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 17 12:06:24.259: INFO: Waiting up to 5m0s for pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-qp5bx" to be "success or failure" Dec 17 12:06:24.274: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.48873ms Dec 17 12:06:26.812: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.552899577s Dec 17 12:06:28.839: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579334684s Dec 17 12:06:31.625: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.365242845s Dec 17 12:06:33.635: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.375997615s Dec 17 12:06:35.902: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.642236253s Dec 17 12:06:37.937: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.677184593s STEP: Saw pod success Dec 17 12:06:37.937: INFO: Pod "pod-a5507925-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 12:06:37.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a5507925-20c5-11ea-a5ef-0242ac110004 container test-container: STEP: delete the pod Dec 17 12:06:38.126: INFO: Waiting for pod pod-a5507925-20c5-11ea-a5ef-0242ac110004 to disappear Dec 17 12:06:38.140: INFO: Pod pod-a5507925-20c5-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:06:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qp5bx" for this suite. Dec 17 12:06:46.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:06:46.762: INFO: namespace: e2e-tests-emptydir-qp5bx, resource: bindings, ignored listing per whitelist Dec 17 12:06:46.873: INFO: namespace e2e-tests-emptydir-qp5bx deletion completed in 8.621279392s • [SLOW TEST:22.807 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:06:46.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-b2f8f82d-20c5-11ea-a5ef-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 17 12:06:47.292: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-bqf8x" to be "success or failure" Dec 17 12:06:47.318: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 25.499848ms Dec 17 12:06:49.351: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058931594s Dec 17 12:06:51.372: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07994951s Dec 17 12:06:53.642: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349501478s Dec 17 12:06:55.661: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368406892s Dec 17 12:06:57.670: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.377924198s STEP: Saw pod success Dec 17 12:06:57.670: INFO: Pod "pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 12:06:57.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 17 12:06:59.077: INFO: Waiting for pod pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004 to disappear Dec 17 12:06:59.408: INFO: Pod pod-projected-configmaps-b2fa3fa9-20c5-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:06:59.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bqf8x" for this suite. Dec 17 12:07:07.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:07:08.092: INFO: namespace: e2e-tests-projected-bqf8x, resource: bindings, ignored listing per whitelist Dec 17 12:07:08.164: INFO: namespace e2e-tests-projected-bqf8x deletion completed in 8.731418539s • [SLOW TEST:21.291 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:07:08.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-bf9c4e8f-20c5-11ea-a5ef-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 17 12:07:08.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-vszbl" to be "success or failure" Dec 17 12:07:08.385: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.828739ms Dec 17 12:07:10.485: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104809826s Dec 17 12:07:12.548: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167701256s Dec 17 12:07:15.115: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734589335s Dec 17 12:07:17.234: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.853625422s Dec 17 12:07:19.250: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869477113s Dec 17 12:07:21.608: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.228119084s STEP: Saw pod success Dec 17 12:07:21.608: INFO: Pod "pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure" Dec 17 12:07:21.628: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 17 12:07:21.769: INFO: Waiting for pod pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004 to disappear Dec 17 12:07:21.778: INFO: Pod pod-projected-configmaps-bf9d86d3-20c5-11ea-a5ef-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:07:21.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vszbl" for this suite. Dec 17 12:07:27.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:07:28.006: INFO: namespace: e2e-tests-projected-vszbl, resource: bindings, ignored listing per whitelist Dec 17 12:07:28.148: INFO: namespace e2e-tests-projected-vszbl deletion completed in 6.345677359s • [SLOW TEST:19.983 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 17 12:07:28.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 17 12:07:38.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4tdhs" for this suite. Dec 17 12:08:24.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 12:08:25.012: INFO: namespace: e2e-tests-kubelet-test-4tdhs, resource: bindings, ignored listing per whitelist Dec 17 12:08:25.084: INFO: namespace e2e-tests-kubelet-test-4tdhs deletion completed in 46.408344373s • [SLOW TEST:56.936 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:08:25.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 12:08:25.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-bpr6z" to be "success or failure"
Dec 17 12:08:25.334: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.696192ms
Dec 17 12:08:27.356: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043078725s
Dec 17 12:08:29.454: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141377528s
Dec 17 12:08:31.477: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164042901s
Dec 17 12:08:33.507: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194479096s
Dec 17 12:08:35.523: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.210829316s
Dec 17 12:08:37.535: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.222139427s
STEP: Saw pod success
Dec 17 12:08:37.535: INFO: Pod "downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:08:37.539: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 12:08:38.988: INFO: Waiting for pod downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:08:39.679: INFO: Pod downwardapi-volume-ed6da659-20c5-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:08:39.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bpr6z" for this suite.
Dec 17 12:08:48.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:08:48.501: INFO: namespace: e2e-tests-downward-api-bpr6z, resource: bindings, ignored listing per whitelist
Dec 17 12:08:48.592: INFO: namespace e2e-tests-downward-api-bpr6z deletion completed in 8.716644583s

• [SLOW TEST:23.507 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:08:48.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-fb6ced73-20c5-11ea-a5ef-0242ac110004
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-fb6ced73-20c5-11ea-a5ef-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:09:01.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hvqcz" for this suite.
Dec 17 12:09:25.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:09:25.424: INFO: namespace: e2e-tests-projected-hvqcz, resource: bindings, ignored listing per whitelist
Dec 17 12:09:25.447: INFO: namespace e2e-tests-projected-hvqcz deletion completed in 24.254112296s

• [SLOW TEST:36.854 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:09:25.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:09:25.675: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 14.828503ms)
Dec 17 12:09:25.680: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.794892ms)
Dec 17 12:09:25.684: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.431499ms)
Dec 17 12:09:25.689: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.214669ms)
Dec 17 12:09:25.695: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.277245ms)
Dec 17 12:09:25.702: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.059177ms)
Dec 17 12:09:25.749: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 47.584624ms)
Dec 17 12:09:25.758: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.996959ms)
Dec 17 12:09:25.767: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.717737ms)
Dec 17 12:09:25.775: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.682421ms)
Dec 17 12:09:25.783: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.95073ms)
Dec 17 12:09:25.792: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.54554ms)
Dec 17 12:09:25.800: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.045912ms)
Dec 17 12:09:25.809: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.919516ms)
Dec 17 12:09:25.816: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.527069ms)
Dec 17 12:09:25.830: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.378503ms)
Dec 17 12:09:25.839: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.467252ms)
Dec 17 12:09:25.856: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.604824ms)
Dec 17 12:09:25.880: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.905259ms)
Dec 17 12:09:25.896: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.970674ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:09:25.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-99ckk" for this suite.
Dec 17 12:09:31.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:09:32.093: INFO: namespace: e2e-tests-proxy-99ckk, resource: bindings, ignored listing per whitelist
Dec 17 12:09:32.096: INFO: namespace e2e-tests-proxy-99ckk deletion completed in 6.19187626s

• [SLOW TEST:6.649 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:09:32.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 17 12:09:32.360: INFO: Waiting up to 5m0s for pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004" in namespace "e2e-tests-containers-gqq2f" to be "success or failure"
Dec 17 12:09:32.433: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 73.213825ms
Dec 17 12:09:34.700: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34062466s
Dec 17 12:09:36.723: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363046494s
Dec 17 12:09:39.242: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88254665s
Dec 17 12:09:41.351: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.991617403s
Dec 17 12:09:43.382: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.022137023s
Dec 17 12:09:45.527: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.166950714s
STEP: Saw pod success
Dec 17 12:09:45.527: INFO: Pod "client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:09:45.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:09:45.903: INFO: Waiting for pod client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:09:45.962: INFO: Pod client-containers-1561ac29-20c6-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:09:45.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gqq2f" for this suite.
Dec 17 12:09:52.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:09:52.087: INFO: namespace: e2e-tests-containers-gqq2f, resource: bindings, ignored listing per whitelist
Dec 17 12:09:52.350: INFO: namespace e2e-tests-containers-gqq2f deletion completed in 6.376966571s

• [SLOW TEST:20.253 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:09:52.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 17 12:12:55.972: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:12:56.019: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:12:58.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:12:58.037: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:00.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:00.038: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:02.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:02.037: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:04.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:04.074: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:06.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:06.049: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:08.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:08.044: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:10.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:10.037: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:12.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:12.124: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:14.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:14.051: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:16.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:16.047: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:18.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:18.043: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:20.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:20.034: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:22.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:22.037: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:24.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:24.033: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:26.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:26.036: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:28.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:28.045: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:30.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:30.035: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:32.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:32.037: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:34.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:34.039: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:36.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:36.049: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:38.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:38.042: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:40.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:40.037: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:42.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:42.042: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:44.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:44.069: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:46.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:46.029: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:48.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:48.028: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:50.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:50.041: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:52.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:52.038: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:54.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:54.035: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:56.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:56.042: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:13:58.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:13:58.029: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:00.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:00.041: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:02.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:02.039: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:04.021: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:04.039: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:06.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:06.035: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:08.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:08.082: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:10.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:10.038: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:12.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:12.045: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:14.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:14.045: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:16.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:16.042: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:18.021: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:18.141: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:20.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:20.050: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:22.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:22.043: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:24.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:24.043: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:26.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:26.062: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:28.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:28.232: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:30.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:30.034: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:32.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:32.038: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:34.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:34.047: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 12:14:36.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 12:14:36.037: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:14:36.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7n47g" for this suite.
Dec 17 12:15:00.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:15:00.197: INFO: namespace: e2e-tests-container-lifecycle-hook-7n47g, resource: bindings, ignored listing per whitelist
Dec 17 12:15:00.276: INFO: namespace e2e-tests-container-lifecycle-hook-7n47g deletion completed in 24.212151216s

• [SLOW TEST:307.926 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:15:00.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 17 12:15:00.774: INFO: Waiting up to 5m0s for pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-clzn6" to be "success or failure"
Dec 17 12:15:00.896: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 122.590452ms
Dec 17 12:15:02.916: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142234373s
Dec 17 12:15:04.934: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159956105s
Dec 17 12:15:07.356: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581816412s
Dec 17 12:15:09.401: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626854831s
Dec 17 12:15:11.481: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.706947263s
STEP: Saw pod success
Dec 17 12:15:11.481: INFO: Pod "pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:15:11.505: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:15:12.841: INFO: Waiting for pod pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:15:12.858: INFO: Pod pod-d92bf2cb-20c6-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:15:12.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-clzn6" for this suite.
Dec 17 12:15:19.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:15:19.076: INFO: namespace: e2e-tests-emptydir-clzn6, resource: bindings, ignored listing per whitelist
Dec 17 12:15:19.197: INFO: namespace e2e-tests-emptydir-clzn6 deletion completed in 6.327326334s

• [SLOW TEST:18.920 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:15:19.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e456c2ae-20c6-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:15:19.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-s4784" to be "success or failure"
Dec 17 12:15:19.744: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 242.570305ms
Dec 17 12:15:21.767: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265124371s
Dec 17 12:15:23.793: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292017446s
Dec 17 12:15:25.844: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342719112s
Dec 17 12:15:27.883: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381441319s
Dec 17 12:15:29.987: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.485791138s
Dec 17 12:15:32.000: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.498927068s
STEP: Saw pod success
Dec 17 12:15:32.000: INFO: Pod "pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:15:32.007: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 12:15:32.276: INFO: Waiting for pod pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:15:32.292: INFO: Pod pod-projected-configmaps-e458350a-20c6-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:15:32.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s4784" for this suite.
Dec 17 12:15:38.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:15:38.519: INFO: namespace: e2e-tests-projected-s4784, resource: bindings, ignored listing per whitelist
Dec 17 12:15:38.640: INFO: namespace e2e-tests-projected-s4784 deletion completed in 6.341565588s

• [SLOW TEST:19.443 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:15:38.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 17 12:15:39.010: INFO: Waiting up to 5m0s for pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004" in namespace "e2e-tests-var-expansion-g4969" to be "success or failure"
Dec 17 12:15:39.017: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6534ms
Dec 17 12:15:41.026: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016008985s
Dec 17 12:15:43.057: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046413831s
Dec 17 12:15:45.286: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275463783s
Dec 17 12:15:47.305: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.29500064s
Dec 17 12:15:49.321: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.31121582s
STEP: Saw pod success
Dec 17 12:15:49.322: INFO: Pod "var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:15:49.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 17 12:15:50.583: INFO: Waiting for pod var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:15:50.651: INFO: Pod var-expansion-efeddfc8-20c6-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:15:50.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-g4969" for this suite.
Dec 17 12:15:56.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:15:56.960: INFO: namespace: e2e-tests-var-expansion-g4969, resource: bindings, ignored listing per whitelist
Dec 17 12:15:57.032: INFO: namespace e2e-tests-var-expansion-g4969 deletion completed in 6.270979717s

• [SLOW TEST:18.391 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:15:57.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 17 12:15:57.381: INFO: Waiting up to 5m0s for pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-cf59h" to be "success or failure"
Dec 17 12:15:57.589: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 207.250479ms
Dec 17 12:15:59.604: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222757803s
Dec 17 12:16:01.628: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246227236s
Dec 17 12:16:03.855: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473553474s
Dec 17 12:16:05.877: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49592595s
Dec 17 12:16:07.886: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504706358s
STEP: Saw pod success
Dec 17 12:16:07.886: INFO: Pod "pod-faea00e0-20c6-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:16:07.899: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-faea00e0-20c6-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:16:08.593: INFO: Waiting for pod pod-faea00e0-20c6-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:16:08.859: INFO: Pod pod-faea00e0-20c6-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:16:08.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cf59h" for this suite.
Dec 17 12:16:14.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:16:15.037: INFO: namespace: e2e-tests-emptydir-cf59h, resource: bindings, ignored listing per whitelist
Dec 17 12:16:15.273: INFO: namespace e2e-tests-emptydir-cf59h deletion completed in 6.387583594s

• [SLOW TEST:18.240 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:16:15.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:16:15.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:16:28.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-m6lnm" for this suite.
Dec 17 12:17:16.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:17:16.373: INFO: namespace: e2e-tests-pods-m6lnm, resource: bindings, ignored listing per whitelist
Dec 17 12:17:16.387: INFO: namespace e2e-tests-pods-m6lnm deletion completed in 48.262431224s

• [SLOW TEST:61.114 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:17:16.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 12:17:16.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-2vfwx" to be "success or failure"
Dec 17 12:17:16.757: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 61.968216ms
Dec 17 12:17:18.980: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284689008s
Dec 17 12:17:21.008: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312051967s
Dec 17 12:17:23.923: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.22771147s
Dec 17 12:17:25.955: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.259527852s
Dec 17 12:17:27.970: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.274937515s
STEP: Saw pod success
Dec 17 12:17:27.971: INFO: Pod "downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:17:27.976: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 12:17:28.771: INFO: Waiting for pod downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:17:28.789: INFO: Pod downwardapi-volume-2a3098e2-20c7-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:17:28.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2vfwx" for this suite.
Dec 17 12:17:36.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:17:37.036: INFO: namespace: e2e-tests-projected-2vfwx, resource: bindings, ignored listing per whitelist
Dec 17 12:17:37.063: INFO: namespace e2e-tests-projected-2vfwx deletion completed in 8.262399644s

• [SLOW TEST:20.676 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:17:37.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 17 12:17:37.298: INFO: Waiting up to 5m0s for pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-95t9k" to be "success or failure"
Dec 17 12:17:37.337: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.441367ms
Dec 17 12:17:39.411: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112995685s
Dec 17 12:17:41.433: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135223097s
Dec 17 12:17:43.776: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.477616285s
Dec 17 12:17:45.792: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494484237s
Dec 17 12:17:47.816: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.517757836s
STEP: Saw pod success
Dec 17 12:17:47.816: INFO: Pod "pod-367bb1a3-20c7-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:17:47.835: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-367bb1a3-20c7-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:17:48.943: INFO: Waiting for pod pod-367bb1a3-20c7-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:17:48.999: INFO: Pod pod-367bb1a3-20c7-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:17:49.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-95t9k" for this suite.
Dec 17 12:17:55.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:17:55.400: INFO: namespace: e2e-tests-emptydir-95t9k, resource: bindings, ignored listing per whitelist
Dec 17 12:17:55.443: INFO: namespace e2e-tests-emptydir-95t9k deletion completed in 6.37321093s

• [SLOW TEST:18.380 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:17:55.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-qn67
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 12:17:56.672: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qn67" in namespace "e2e-tests-subpath-cdnqx" to be "success or failure"
Dec 17 12:17:56.791: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 119.117469ms
Dec 17 12:17:58.811: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138876502s
Dec 17 12:18:00.827: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155059474s
Dec 17 12:18:03.290: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618149876s
Dec 17 12:18:05.304: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.632280925s
Dec 17 12:18:07.317: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645616259s
Dec 17 12:18:09.337: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 12.664986777s
Dec 17 12:18:11.366: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 14.694471433s
Dec 17 12:18:13.434: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Pending", Reason="", readiness=false. Elapsed: 16.762513622s
Dec 17 12:18:15.499: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 18.827097908s
Dec 17 12:18:17.530: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 20.858169612s
Dec 17 12:18:19.580: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 22.9085703s
Dec 17 12:18:21.600: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 24.928005302s
Dec 17 12:18:23.621: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 26.948857045s
Dec 17 12:18:25.645: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 28.97313307s
Dec 17 12:18:27.665: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 30.993271855s
Dec 17 12:18:29.681: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 33.009571972s
Dec 17 12:18:31.704: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Running", Reason="", readiness=false. Elapsed: 35.032075856s
Dec 17 12:18:34.451: INFO: Pod "pod-subpath-test-configmap-qn67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.779741962s
STEP: Saw pod success
Dec 17 12:18:34.452: INFO: Pod "pod-subpath-test-configmap-qn67" satisfied condition "success or failure"
Dec 17 12:18:34.463: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-qn67 container test-container-subpath-configmap-qn67: 
STEP: delete the pod
Dec 17 12:18:35.006: INFO: Waiting for pod pod-subpath-test-configmap-qn67 to disappear
Dec 17 12:18:35.087: INFO: Pod pod-subpath-test-configmap-qn67 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qn67
Dec 17 12:18:35.087: INFO: Deleting pod "pod-subpath-test-configmap-qn67" in namespace "e2e-tests-subpath-cdnqx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:18:35.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-cdnqx" for this suite.
Dec 17 12:18:41.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:18:41.220: INFO: namespace: e2e-tests-subpath-cdnqx, resource: bindings, ignored listing per whitelist
Dec 17 12:18:41.426: INFO: namespace e2e-tests-subpath-cdnqx deletion completed in 6.316692548s

• [SLOW TEST:45.983 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:18:41.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 17 12:18:41.721: INFO: Waiting up to 5m0s for pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-rzklj" to be "success or failure"
Dec 17 12:18:41.757: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 36.46099ms
Dec 17 12:18:43.811: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090788586s
Dec 17 12:18:45.837: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116214276s
Dec 17 12:18:48.071: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349859153s
Dec 17 12:18:50.085: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.364304975s
Dec 17 12:18:52.101: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.380536343s
STEP: Saw pod success
Dec 17 12:18:52.101: INFO: Pod "pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:18:52.112: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:18:52.286: INFO: Waiting for pod pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:18:52.319: INFO: Pod pod-5cdd6a9c-20c7-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:18:52.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rzklj" for this suite.
Dec 17 12:18:58.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:18:58.710: INFO: namespace: e2e-tests-emptydir-rzklj, resource: bindings, ignored listing per whitelist
Dec 17 12:18:58.787: INFO: namespace e2e-tests-emptydir-rzklj deletion completed in 6.363717139s

• [SLOW TEST:17.360 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:18:58.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:18:59.152: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 17 12:19:04.198: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 17 12:19:10.328: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 17 12:19:10.562: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-hm7c4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hm7c4/deployments/test-cleanup-deployment,UID:6df6276b-20c7-11ea-a994-fa163e34d433,ResourceVersion:15122004,Generation:1,CreationTimestamp:2019-12-17 12:19:10 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 17 12:19:10.582: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Dec 17 12:19:10.582: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 17 12:19:10.584: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-hm7c4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hm7c4/replicasets/test-cleanup-controller,UID:67325052-20c7-11ea-a994-fa163e34d433,ResourceVersion:15122006,Generation:1,CreationTimestamp:2019-12-17 12:18:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6df6276b-20c7-11ea-a994-fa163e34d433 0xc0019e4767 0xc0019e4768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 17 12:19:10.605: INFO: Pod "test-cleanup-controller-mzs27" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-mzs27,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-hm7c4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hm7c4/pods/test-cleanup-controller-mzs27,UID:6748f3a7-20c7-11ea-a994-fa163e34d433,ResourceVersion:15122001,Generation:0,CreationTimestamp:2019-12-17 12:18:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 67325052-20c7-11ea-a994-fa163e34d433 0xc00176ecb7 0xc00176ecb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tc247 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tc247,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tc247 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00176eda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00176ee20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:19:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:19:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:18:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-17 12:18:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 12:19:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://75c6f6dc1263a3a6bc007725bcd7059872a95a9a7341ea59b4b88444539a8d93}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:19:10.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hm7c4" for this suite.
Dec 17 12:19:21.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:19:21.460: INFO: namespace: e2e-tests-deployment-hm7c4, resource: bindings, ignored listing per whitelist
Dec 17 12:19:21.605: INFO: namespace e2e-tests-deployment-hm7c4 deletion completed in 10.78672649s

• [SLOW TEST:22.819 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:19:21.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-75059565-20c7-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:19:23.540: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-4pwjt" to be "success or failure"
Dec 17 12:19:24.308: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 768.312978ms
Dec 17 12:19:26.329: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.789000317s
Dec 17 12:19:28.362: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.82258965s
Dec 17 12:19:31.351: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.811104513s
Dec 17 12:19:33.360: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.820080397s
Dec 17 12:19:35.382: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.84270706s
STEP: Saw pod success
Dec 17 12:19:35.383: INFO: Pod "pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:19:35.393: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 12:19:35.652: INFO: Waiting for pod pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:19:36.676: INFO: Pod pod-projected-configmaps-759c3a0e-20c7-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:19:36.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4pwjt" for this suite.
Dec 17 12:19:43.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:19:43.081: INFO: namespace: e2e-tests-projected-4pwjt, resource: bindings, ignored listing per whitelist
Dec 17 12:19:43.177: INFO: namespace e2e-tests-projected-4pwjt deletion completed in 6.472108285s

• [SLOW TEST:21.570 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:19:43.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 17 12:19:43.504: INFO: Waiting up to 5m0s for pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-d5x7v" to be "success or failure"
Dec 17 12:19:43.566: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 61.785702ms
Dec 17 12:19:45.584: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080170397s
Dec 17 12:19:47.643: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139199666s
Dec 17 12:19:49.712: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207656738s
Dec 17 12:19:51.785: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.281372958s
Dec 17 12:19:53.820: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.315716943s
Dec 17 12:19:55.875: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.370748069s
STEP: Saw pod success
Dec 17 12:19:55.875: INFO: Pod "pod-81a845f7-20c7-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:19:55.890: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-81a845f7-20c7-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:19:56.061: INFO: Waiting for pod pod-81a845f7-20c7-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:19:56.070: INFO: Pod pod-81a845f7-20c7-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:19:56.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d5x7v" for this suite.
Dec 17 12:20:02.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:20:02.304: INFO: namespace: e2e-tests-emptydir-d5x7v, resource: bindings, ignored listing per whitelist
Dec 17 12:20:02.310: INFO: namespace e2e-tests-emptydir-d5x7v deletion completed in 6.226681017s

• [SLOW TEST:19.133 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:20:02.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dfkr4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dfkr4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 12:20:18.835: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:18.882: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:18.922: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:18.941: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:18.950: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:18.968: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:18.989: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.000: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.026: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.041: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.188: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.201: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.209: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.216: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.222: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.235: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.250: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.261: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.267: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.272: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004)
Dec 17 12:20:19.272: INFO: Lookups using e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dfkr4.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 17 12:20:24.411: INFO: DNS probes using e2e-tests-dns-dfkr4/dns-test-8d16675f-20c7-11ea-a5ef-0242ac110004 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:20:24.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-dfkr4" for this suite.
Dec 17 12:20:32.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:20:32.733: INFO: namespace: e2e-tests-dns-dfkr4, resource: bindings, ignored listing per whitelist
Dec 17 12:20:32.817: INFO: namespace e2e-tests-dns-dfkr4 deletion completed in 8.320316303s

• [SLOW TEST:30.507 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:20:32.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-gl8j
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 12:20:33.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gl8j" in namespace "e2e-tests-subpath-w9wql" to be "success or failure"
Dec 17 12:20:33.462: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 205.626791ms
Dec 17 12:20:35.534: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277714009s
Dec 17 12:20:37.548: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291529439s
Dec 17 12:20:39.737: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480652506s
Dec 17 12:20:41.786: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529287943s
Dec 17 12:20:43.802: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.545932287s
Dec 17 12:20:45.815: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.559118751s
Dec 17 12:20:47.919: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.662805031s
Dec 17 12:20:49.946: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 16.689719428s
Dec 17 12:20:51.977: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 18.720852146s
Dec 17 12:20:54.015: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 20.758402966s
Dec 17 12:20:56.029: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 22.772215714s
Dec 17 12:20:58.050: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 24.793580265s
Dec 17 12:21:00.064: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 26.807842159s
Dec 17 12:21:02.096: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 28.840195854s
Dec 17 12:21:04.115: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 30.85903543s
Dec 17 12:21:06.142: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Running", Reason="", readiness=false. Elapsed: 32.886074814s
Dec 17 12:21:08.171: INFO: Pod "pod-subpath-test-secret-gl8j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.914914396s
STEP: Saw pod success
Dec 17 12:21:08.171: INFO: Pod "pod-subpath-test-secret-gl8j" satisfied condition "success or failure"
Dec 17 12:21:08.186: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gl8j container test-container-subpath-secret-gl8j: 
STEP: delete the pod
Dec 17 12:21:08.293: INFO: Waiting for pod pod-subpath-test-secret-gl8j to disappear
Dec 17 12:21:08.307: INFO: Pod pod-subpath-test-secret-gl8j no longer exists
STEP: Deleting pod pod-subpath-test-secret-gl8j
Dec 17 12:21:08.307: INFO: Deleting pod "pod-subpath-test-secret-gl8j" in namespace "e2e-tests-subpath-w9wql"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:21:08.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-w9wql" for this suite.
Dec 17 12:21:16.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:21:16.618: INFO: namespace: e2e-tests-subpath-w9wql, resource: bindings, ignored listing per whitelist
Dec 17 12:21:16.647: INFO: namespace e2e-tests-subpath-w9wql deletion completed in 8.327054578s

• [SLOW TEST:43.828 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:21:16.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 17 12:21:16.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dfprq'
Dec 17 12:21:19.539: INFO: stderr: ""
Dec 17 12:21:19.539: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 17 12:21:21.308: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:21.308: INFO: Found 0 / 1
Dec 17 12:21:21.696: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:21.697: INFO: Found 0 / 1
Dec 17 12:21:22.587: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:22.588: INFO: Found 0 / 1
Dec 17 12:21:23.565: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:23.566: INFO: Found 0 / 1
Dec 17 12:21:24.571: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:24.571: INFO: Found 0 / 1
Dec 17 12:21:25.622: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:25.622: INFO: Found 0 / 1
Dec 17 12:21:26.686: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:26.686: INFO: Found 0 / 1
Dec 17 12:21:27.560: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:27.560: INFO: Found 0 / 1
Dec 17 12:21:28.582: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:28.582: INFO: Found 0 / 1
Dec 17 12:21:29.611: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:29.612: INFO: Found 1 / 1
Dec 17 12:21:29.612: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 17 12:21:29.621: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 12:21:29.621: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 17 12:21:29.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vbr2w redis-master --namespace=e2e-tests-kubectl-dfprq'
Dec 17 12:21:29.940: INFO: stderr: ""
Dec 17 12:21:29.941: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Dec 12:21:27.437 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 12:21:27.437 # Server started, Redis version 3.2.12\n1:M 17 Dec 12:21:27.437 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Dec 12:21:27.437 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 17 12:21:29.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vbr2w redis-master --namespace=e2e-tests-kubectl-dfprq --tail=1'
Dec 17 12:21:30.131: INFO: stderr: ""
Dec 17 12:21:30.131: INFO: stdout: "1:M 17 Dec 12:21:27.437 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 17 12:21:30.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vbr2w redis-master --namespace=e2e-tests-kubectl-dfprq --limit-bytes=1'
Dec 17 12:21:30.286: INFO: stderr: ""
Dec 17 12:21:30.286: INFO: stdout: " "
STEP: exposing timestamps
Dec 17 12:21:30.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vbr2w redis-master --namespace=e2e-tests-kubectl-dfprq --tail=1 --timestamps'
Dec 17 12:21:30.427: INFO: stderr: ""
Dec 17 12:21:30.427: INFO: stdout: "2019-12-17T12:21:27.439191096Z 1:M 17 Dec 12:21:27.437 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 17 12:21:32.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vbr2w redis-master --namespace=e2e-tests-kubectl-dfprq --since=1s'
Dec 17 12:21:33.187: INFO: stderr: ""
Dec 17 12:21:33.187: INFO: stdout: ""
Dec 17 12:21:33.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vbr2w redis-master --namespace=e2e-tests-kubectl-dfprq --since=24h'
Dec 17 12:21:33.354: INFO: stderr: ""
Dec 17 12:21:33.354: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Dec 12:21:27.437 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 12:21:27.437 # Server started, Redis version 3.2.12\n1:M 17 Dec 12:21:27.437 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Dec 12:21:27.437 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 17 12:21:33.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dfprq'
Dec 17 12:21:33.474: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 12:21:33.475: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 17 12:21:33.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-dfprq'
Dec 17 12:21:33.660: INFO: stderr: "No resources found.\n"
Dec 17 12:21:33.661: INFO: stdout: ""
Dec 17 12:21:33.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-dfprq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 12:21:33.899: INFO: stderr: ""
Dec 17 12:21:33.899: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:21:33.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dfprq" for this suite.
Dec 17 12:21:57.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:21:58.154: INFO: namespace: e2e-tests-kubectl-dfprq, resource: bindings, ignored listing per whitelist
Dec 17 12:21:58.173: INFO: namespace e2e-tests-kubectl-dfprq deletion completed in 24.254523507s

• [SLOW TEST:41.525 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
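Editor's note: the spec above exercises kubectl's log-filtering flags: `--tail=1` (last N lines), `--limit-bytes=1` (byte cap), `--timestamps` (prefix each line with its timestamp), and `--since` (time window). Outside a cluster, the line- and byte-limiting semantics can be sketched with coreutils against a stand-in log file (illustrative only; the file path and contents are made up):

```shell
#!/bin/sh
# Stand-in for a container's log stream.
printf 'starting up\nready to accept connections\n' > /tmp/pod.log

# --tail=1 keeps only the final line, like `tail -n 1`.
tail -n 1 /tmp/pod.log

# --limit-bytes=1 truncates the stream after one byte, like `head -c 1`.
head -c 1 /tmp/pod.log
```

This matches the run above, where `--limit-bytes=1` returned stdout `" "`: the first byte of the Redis startup banner is a space.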
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:21:58.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1217 12:22:09.280356       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 12:22:09.280: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:22:09.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-k44dd" for this suite.
Dec 17 12:22:15.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:22:15.705: INFO: namespace: e2e-tests-gc-k44dd, resource: bindings, ignored listing per whitelist
Dec 17 12:22:15.729: INFO: namespace e2e-tests-gc-k44dd deletion completed in 6.442724592s

• [SLOW TEST:17.557 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
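Editor's note: deleting the rc "when not orphaning" works because every pod the rc created carries an `ownerReferences` entry pointing back at it; once the owner is deleted, the garbage collector removes the dependents. A sketch of the relevant pod metadata (names are hypothetical; the owner UID is elided):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-abc12        # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc            # hypothetical owner name
    uid: ...                       # must match the owner's actual UID
    controller: true
    blockOwnerDeletion: true
```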
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:22:15.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 17 12:22:15.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:16.460: INFO: stderr: ""
Dec 17 12:22:16.460: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 12:22:16.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:16.688: INFO: stderr: ""
Dec 17 12:22:16.688: INFO: stdout: "update-demo-nautilus-s2gff update-demo-nautilus-vgfsl "
Dec 17 12:22:16.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2gff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:16.801: INFO: stderr: ""
Dec 17 12:22:16.802: INFO: stdout: ""
Dec 17 12:22:16.802: INFO: update-demo-nautilus-s2gff is created but not running
Dec 17 12:22:21.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:22.091: INFO: stderr: ""
Dec 17 12:22:22.091: INFO: stdout: "update-demo-nautilus-s2gff update-demo-nautilus-vgfsl "
Dec 17 12:22:22.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2gff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:22.258: INFO: stderr: ""
Dec 17 12:22:22.259: INFO: stdout: ""
Dec 17 12:22:22.259: INFO: update-demo-nautilus-s2gff is created but not running
Dec 17 12:22:27.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:27.625: INFO: stderr: ""
Dec 17 12:22:27.625: INFO: stdout: "update-demo-nautilus-s2gff update-demo-nautilus-vgfsl "
Dec 17 12:22:27.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2gff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:27.801: INFO: stderr: ""
Dec 17 12:22:27.801: INFO: stdout: ""
Dec 17 12:22:27.801: INFO: update-demo-nautilus-s2gff is created but not running
Dec 17 12:22:32.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:33.007: INFO: stderr: ""
Dec 17 12:22:33.007: INFO: stdout: "update-demo-nautilus-s2gff update-demo-nautilus-vgfsl "
Dec 17 12:22:33.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2gff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:33.126: INFO: stderr: ""
Dec 17 12:22:33.126: INFO: stdout: "true"
Dec 17 12:22:33.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2gff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:33.259: INFO: stderr: ""
Dec 17 12:22:33.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 12:22:33.259: INFO: validating pod update-demo-nautilus-s2gff
Dec 17 12:22:33.289: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 12:22:33.289: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 12:22:33.289: INFO: update-demo-nautilus-s2gff is verified up and running
Dec 17 12:22:33.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgfsl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:33.421: INFO: stderr: ""
Dec 17 12:22:33.421: INFO: stdout: "true"
Dec 17 12:22:33.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgfsl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:33.514: INFO: stderr: ""
Dec 17 12:22:33.514: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 12:22:33.514: INFO: validating pod update-demo-nautilus-vgfsl
Dec 17 12:22:33.534: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 12:22:33.534: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 12:22:33.534: INFO: update-demo-nautilus-vgfsl is verified up and running
STEP: using delete to clean up resources
Dec 17 12:22:33.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:33.693: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 12:22:33.694: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 17 12:22:33.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-nhhpg'
Dec 17 12:22:34.129: INFO: stderr: "No resources found.\n"
Dec 17 12:22:34.129: INFO: stdout: ""
Dec 17 12:22:34.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-nhhpg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 12:22:34.350: INFO: stderr: ""
Dec 17 12:22:34.350: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:22:34.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nhhpg" for this suite.
Dec 17 12:22:58.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:22:58.622: INFO: namespace: e2e-tests-kubectl-nhhpg, resource: bindings, ignored listing per whitelist
Dec 17 12:22:58.656: INFO: namespace e2e-tests-kubectl-nhhpg deletion completed in 24.284159123s

• [SLOW TEST:42.926 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
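Editor's note: the manifest the test pipes to `kubectl create -f -` is not echoed in the log. Reconstructed from the rc name, label selector, image, and the two pod replicas seen above, it approximately corresponds to:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                      # inferred from the two pods observed
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```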
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:22:58.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1217 12:23:13.132852       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 12:23:13.132: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:23:13.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m8z9c" for this suite.
Dec 17 12:23:45.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:23:45.381: INFO: namespace: e2e-tests-gc-m8z9c, resource: bindings, ignored listing per whitelist
Dec 17 12:23:45.458: INFO: namespace e2e-tests-gc-m8z9c deletion completed in 32.317821911s

• [SLOW TEST:46.800 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
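Editor's note: here half of the pods created by simpletest-rc-to-be-deleted also list simpletest-rc-to-stay as an owner, so they must survive the delete: they still have one valid owner even while the other is waiting on dependents. The "waiting" behavior comes from the propagation policy in the DeleteOptions body of the owner's DELETE request (shape per the v1 API; shown as YAML):

```yaml
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground    # owner is not removed until its sole-owned dependents are
```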
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:23:45.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1206c106-20c8-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:23:45.728: INFO: Waiting up to 5m0s for pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-czj7m" to be "success or failure"
Dec 17 12:23:45.737: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188359ms
Dec 17 12:23:47.793: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064998304s
Dec 17 12:23:49.815: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086422514s
Dec 17 12:23:52.018: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.289780397s
Dec 17 12:23:54.084: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35622312s
Dec 17 12:23:56.124: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.395524296s
STEP: Saw pod success
Dec 17 12:23:56.124: INFO: Pod "pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:23:56.154: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 17 12:23:56.539: INFO: Waiting for pod pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:23:56.564: INFO: Pod pod-configmaps-12131d48-20c8-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:23:56.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-czj7m" for this suite.
Dec 17 12:24:02.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:24:02.743: INFO: namespace: e2e-tests-configmap-czj7m, resource: bindings, ignored listing per whitelist
Dec 17 12:24:02.918: INFO: namespace e2e-tests-configmap-czj7m deletion completed in 6.272529935s

• [SLOW TEST:17.460 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
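Editor's note: "mappings and Item mode" refers to the `items` list and per-item `mode` on a configMap volume source: individual keys are projected to chosen paths with explicit file permissions. A minimal sketch (ConfigMap name shortened from the log; the key, path, and mode are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name from the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2                  # hypothetical key
        path: path/to/data-2         # key remapped to this relative path
        mode: 0400                   # per-item file mode
```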
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:24:02.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 17 12:24:13.262: INFO: Pod pod-hostip-1c766372-20c8-11ea-a5ef-0242ac110004 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:24:13.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jvx69" for this suite.
Dec 17 12:24:37.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:24:37.487: INFO: namespace: e2e-tests-pods-jvx69, resource: bindings, ignored listing per whitelist
Dec 17 12:24:37.539: INFO: namespace e2e-tests-pods-jvx69 deletion completed in 24.268563503s

• [SLOW TEST:34.620 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
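Editor's note: the spec asserts that `status.hostIP` is populated once the pod is scheduled (10.96.1.240 above). From inside a pod, the same server-populated value can be surfaced through the downward API (pod and container names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: show-host-ip                 # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # filled in by the kubelet/apiserver
```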
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:24:37.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 17 12:24:37.700: INFO: PodSpec: initContainers in spec.initContainers
Dec 17 12:25:51.149: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3111914b-20c8-11ea-a5ef-0242ac110004", GenerateName:"", Namespace:"e2e-tests-init-container-t8dwq", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-t8dwq/pods/pod-init-3111914b-20c8-11ea-a5ef-0242ac110004", UID:"3112a2fc-20c8-11ea-a994-fa163e34d433", ResourceVersion:"15122937", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712182277, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"700443541"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-46g6v", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001616740), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-46g6v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-46g6v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-46g6v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001afd888), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001afe120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001afd900)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001afd920)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001afd928), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001afd92c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182278, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182278, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182278, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182277, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0013f8220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000464850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ac54c3d33ad0de49a3076fc9957a3b0dea9f4dbc3f6c561254bff0e9aa5070cd"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013f8260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013f8240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:25:51.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-t8dwq" for this suite.
Dec 17 12:26:15.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:26:15.312: INFO: namespace: e2e-tests-init-container-t8dwq, resource: bindings, ignored listing per whitelist
Dec 17 12:26:15.457: INFO: namespace e2e-tests-init-container-t8dwq deletion completed in 24.284874451s

• [SLOW TEST:97.917 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
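The InitContainer test above verifies a documented guarantee: with `RestartPolicy: Always`, a failing init container is retried (its restart count climbs, as seen for `init1` in the pod dump) and the app container (`run1`) stays in a Waiting state and never starts. A toy model of that gating logic, purely illustrative and not the real kubelet code:

```python
def run_pod(init_containers, app_containers, max_retries=3):
    """Toy model of kubelet init-container gating (illustrative sketch).

    Each container is (name, succeeds). With RestartPolicy=Always a failing
    init container is retried rather than abandoned, and no app container
    may start until every init container has succeeded.
    """
    restarts = {name: 0 for name, _ in init_containers}
    started_apps = []
    for name, succeeds in init_containers:
        while not succeeds:
            restarts[name] += 1
            if restarts[name] >= max_retries:
                # Init container still failing: app containers never start.
                return {"phase": "Pending", "restarts": restarts,
                        "started": started_apps}
    for name, _ in app_containers:
        started_apps.append(name)
    return {"phase": "Running", "restarts": restarts, "started": started_apps}
```

Mirroring the test scenario (`init1` runs `/bin/false`), the pod stays Pending with `run1` unstarted while `init1` accumulates restarts.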
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:26:15.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 17 12:26:15.775: INFO: Waiting up to 5m0s for pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-containers-vw55h" to be "success or failure"
Dec 17 12:26:15.782: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484411ms
Dec 17 12:26:17.795: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019095975s
Dec 17 12:26:19.942: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166540834s
Dec 17 12:26:22.000: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224753191s
Dec 17 12:26:24.045: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269114486s
Dec 17 12:26:26.581: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.805499163s
STEP: Saw pod success
Dec 17 12:26:26.581: INFO: Pod "client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:26:26.612: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 12:26:26.857: INFO: Waiting for pod client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:26:26.934: INFO: Pod client-containers-6b7a4618-20c8-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:26:26.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vw55h" for this suite.
Dec 17 12:26:33.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:26:33.115: INFO: namespace: e2e-tests-containers-vw55h, resource: bindings, ignored listing per whitelist
Dec 17 12:26:33.155: INFO: namespace e2e-tests-containers-vw55h deletion completed in 6.170466772s

• [SLOW TEST:17.698 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
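The Docker Containers test above exercises the documented interaction between a pod's `command`/`args` and the image's `ENTRYPOINT`/`CMD`: setting `args` alone replaces the image's default `CMD` while keeping its `ENTRYPOINT`. A sketch of those four documented cases (not kubelet code; the names are placeholders):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the command line a container runs, per the documented
    Kubernetes command/args vs. ENTRYPOINT/CMD rules (illustrative sketch)."""
    if command is None and args is None:
        return image_entrypoint + image_cmd   # image defaults used as-is
    if command is not None and args is None:
        return command                        # image ENTRYPOINT and CMD ignored
    if command is None and args is not None:
        return image_entrypoint + args        # args override CMD only
    return command + args                     # both overridden
```

The test corresponds to the third case: supplying `args` overrides the image's default arguments (docker `CMD`) without touching the entrypoint.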
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:26:33.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-kjlqh/configmap-test-760510db-20c8-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:26:33.402: INFO: Waiting up to 5m0s for pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-kjlqh" to be "success or failure"
Dec 17 12:26:33.412: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.199829ms
Dec 17 12:26:35.440: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037630128s
Dec 17 12:26:37.473: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071030189s
Dec 17 12:26:39.514: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112387773s
Dec 17 12:26:41.535: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133061513s
Dec 17 12:26:43.548: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146191671s
STEP: Saw pod success
Dec 17 12:26:43.548: INFO: Pod "pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:26:43.553: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004 container env-test: 
STEP: delete the pod
Dec 17 12:26:43.688: INFO: Waiting for pod pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:26:43.704: INFO: Pod pod-configmaps-76064af8-20c8-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:26:43.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kjlqh" for this suite.
Dec 17 12:26:50.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:26:50.830: INFO: namespace: e2e-tests-configmap-kjlqh, resource: bindings, ignored listing per whitelist
Dec 17 12:26:50.961: INFO: namespace e2e-tests-configmap-kjlqh deletion completed in 7.244375727s

• [SLOW TEST:17.806 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
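The ConfigMap test above consumes ConfigMap data through a container's environment via `valueFrom.configMapKeyRef`. A minimal sketch of that resolution step (illustrative only; the key and variable names below are made up, not taken from the test):

```python
def resolve_env(env_spec, configmaps):
    """Resolve a container's env entries against ConfigMap data
    (illustrative sketch of valueFrom.configMapKeyRef resolution)."""
    resolved = {}
    for entry in env_spec:
        if "value" in entry:
            resolved[entry["name"]] = entry["value"]
        else:
            ref = entry["valueFrom"]["configMapKeyRef"]
            resolved[entry["name"]] = configmaps[ref["name"]][ref["key"]]
    return resolved
```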
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:26:50.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:26:51.300: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 17 12:26:51.327: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 17 12:26:56.346: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 17 12:27:02.383: INFO: Creating deployment "test-rolling-update-deployment"
Dec 17 12:27:02.402: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 17 12:27:02.416: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 17 12:27:04.660: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 17 12:27:04.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:27:06.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:27:09.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:27:10.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:27:12.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712182422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:27:14.947: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 17 12:27:14.985: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-khrhd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-khrhd/deployments/test-rolling-update-deployment,UID:874f2fd7-20c8-11ea-a994-fa163e34d433,ResourceVersion:15123143,Generation:1,CreationTimestamp:2019-12-17 12:27:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-17 12:27:02 +0000 UTC 2019-12-17 12:27:02 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-17 12:27:13 +0000 UTC 2019-12-17 12:27:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 17 12:27:14.993: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-khrhd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-khrhd/replicasets/test-rolling-update-deployment-75db98fb4c,UID:8755ef54-20c8-11ea-a994-fa163e34d433,ResourceVersion:15123132,Generation:1,CreationTimestamp:2019-12-17 12:27:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 874f2fd7-20c8-11ea-a994-fa163e34d433 0xc00251d557 0xc00251d558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 17 12:27:14.993: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 17 12:27:14.994: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-khrhd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-khrhd/replicasets/test-rolling-update-controller,UID:80b3a799-20c8-11ea-a994-fa163e34d433,ResourceVersion:15123142,Generation:2,CreationTimestamp:2019-12-17 12:26:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 874f2fd7-20c8-11ea-a994-fa163e34d433 0xc00251d487 0xc00251d488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 12:27:15.007: INFO: Pod "test-rolling-update-deployment-75db98fb4c-7p4pt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-7p4pt,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-khrhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-khrhd/pods/test-rolling-update-deployment-75db98fb4c-7p4pt,UID:875948fb-20c8-11ea-a994-fa163e34d433,ResourceVersion:15123131,Generation:0,CreationTimestamp:2019-12-17 12:27:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 8755ef54-20c8-11ea-a994-fa163e34d433 0xc0024788c7 0xc0024788c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7cg5t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7cg5t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7cg5t true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002478930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002478950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:27:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:27:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:27:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:27:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-17 12:27:02 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-17 12:27:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e0e4c9121c068cfa94771041b89418e99eda8604c11bb7063848f5f144001b59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:27:15.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-khrhd" for this suite.
Dec 17 12:27:23.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:27:23.174: INFO: namespace: e2e-tests-deployment-khrhd, resource: bindings, ignored listing per whitelist
Dec 17 12:27:23.348: INFO: namespace e2e-tests-deployment-khrhd deletion completed in 8.262864879s

• [SLOW TEST:32.387 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
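The Deployment dump above shows the default strategy `RollingUpdate` with `MaxSurge: 25%` and `MaxUnavailable: 25%`. The documented rounding (surge rounds up, unavailable rounds down) explains the `Replicas:2` seen mid-rollout for a 1-replica deployment; a sketch of that arithmetic:

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Pod-count bounds a RollingUpdate must respect (illustrative sketch).

    Percentages follow the documented rounding: maxSurge rounds up,
    maxUnavailable rounds down. Returns (max total pods, min available pods).
    """
    def to_count(v, round_up):
        if isinstance(v, str) and v.endswith("%"):
            frac = int(v[:-1]) / 100 * replicas
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(v)
    surge = to_count(max_surge, round_up=True)
    unavailable = to_count(max_unavailable, round_up=False)
    return replicas + surge, replicas - unavailable
```

For `replicas=1` this yields at most 2 pods and at least 1 available, matching the transient `Replicas:2, AvailableReplicas:1` status polled in the log.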
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:27:23.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-94bacd47-20c8-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:27:25.054: INFO: Waiting up to 5m0s for pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-c7sh4" to be "success or failure"
Dec 17 12:27:25.118: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 63.775784ms
Dec 17 12:27:27.134: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079147802s
Dec 17 12:27:29.166: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111849565s
Dec 17 12:27:31.488: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433136707s
Dec 17 12:27:34.448: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.393190008s
Dec 17 12:27:36.468: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.413894122s
STEP: Saw pod success
Dec 17 12:27:36.469: INFO: Pod "pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:27:36.476: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 17 12:27:37.072: INFO: Waiting for pod pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:27:37.143: INFO: Pod pod-configmaps-94bdb772-20c8-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:27:37.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c7sh4" for this suite.
Dec 17 12:27:45.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:27:45.285: INFO: namespace: e2e-tests-configmap-c7sh4, resource: bindings, ignored listing per whitelist
Dec 17 12:27:45.502: INFO: namespace e2e-tests-configmap-c7sh4 deletion completed in 8.346947078s

• [SLOW TEST:22.153 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
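The ConfigMap test above exercises the `defaultMode` field on a ConfigMap volume. A minimal sketch of the kind of pod the suite generates (the names and image here are illustrative, not the exact manifest the framework builds):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # hypothetical name
spec:
  restartPolicy: Never                  # run once, then report Succeeded/Failed
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # illustrative image
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example   # hypothetical ConfigMap name
      defaultMode: 0400                     # every projected file gets mode 0400
```

The "success or failure" polling in the log corresponds to waiting for this one-shot pod to reach `Phase=Succeeded`.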
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:27:45.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a11f219f-20c8-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:27:45.742: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-xts5l" to be "success or failure"
Dec 17 12:27:45.887: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 145.165519ms
Dec 17 12:27:48.494: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752143313s
Dec 17 12:27:50.528: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.786113153s
Dec 17 12:27:52.573: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.831497563s
Dec 17 12:27:54.597: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.85467385s
Dec 17 12:27:56.606: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.864151643s
STEP: Saw pod success
Dec 17 12:27:56.606: INFO: Pod "pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:27:56.614: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 12:27:57.309: INFO: Waiting for pod pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:27:57.569: INFO: Pod pod-projected-secrets-a120dafc-20c8-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:27:57.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xts5l" for this suite.
Dec 17 12:28:03.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:28:03.970: INFO: namespace: e2e-tests-projected-xts5l, resource: bindings, ignored listing per whitelist
Dec 17 12:28:04.076: INFO: namespace e2e-tests-projected-xts5l deletion completed in 6.494856984s

• [SLOW TEST:18.574 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:28:04.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 12:28:04.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:04.817: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 12:28:04.817: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 17 12:28:04.922: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 17 12:28:04.974: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 17 12:28:05.015: INFO: scanned /root for discovery docs: 
Dec 17 12:28:05.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:31.166: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 17 12:28:31.167: INFO: stdout: "Created e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb\nScaling up e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 17 12:28:31.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:31.423: INFO: stderr: ""
Dec 17 12:28:31.423: INFO: stdout: "e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb-mdgmd e2e-test-nginx-rc-sfll9 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 17 12:28:36.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:36.691: INFO: stderr: ""
Dec 17 12:28:36.692: INFO: stdout: "e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb-mdgmd "
Dec 17 12:28:36.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb-mdgmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:36.796: INFO: stderr: ""
Dec 17 12:28:36.796: INFO: stdout: "true"
Dec 17 12:28:36.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb-mdgmd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:36.963: INFO: stderr: ""
Dec 17 12:28:36.963: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 17 12:28:36.963: INFO: e2e-test-nginx-rc-5e3a1b8f812a86947ccb7b1f91e1b1fb-mdgmd is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 17 12:28:36.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2f6fz'
Dec 17 12:28:37.105: INFO: stderr: ""
Dec 17 12:28:37.105: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:28:37.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2f6fz" for this suite.
Dec 17 12:28:45.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:28:45.228: INFO: namespace: e2e-tests-kubectl-2f6fz, resource: bindings, ignored listing per whitelist
Dec 17 12:28:45.337: INFO: namespace e2e-tests-kubectl-2f6fz deletion completed in 8.224445869s

• [SLOW TEST:41.261 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
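The stderr lines above note that both `kubectl run --generator=run/v1` and `kubectl rolling-update` are deprecated. A hedged sketch of the declarative Deployment equivalent (names are illustrative): the Deployment controller scales a new ReplicaSet up and the old one down, much as the log shows for the two ReplicationControllers.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # mirrors "don't exceed 2 pods" in the RC log
      maxUnavailable: 0           # mirrors "keep 1 pods available"
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent
```

With a Deployment, the rollout observed above would be driven by `kubectl rollout status` rather than the removed `rolling-update` verb.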
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:28:45.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 17 12:28:56.309: INFO: Successfully updated pod "annotationupdatec4d1aaea-20c8-11ea-a5ef-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:28:58.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n4v5b" for this suite.
Dec 17 12:29:22.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:29:22.684: INFO: namespace: e2e-tests-projected-n4v5b, resource: bindings, ignored listing per whitelist
Dec 17 12:29:22.757: INFO: namespace e2e-tests-projected-n4v5b deletion completed in 24.26462859s

• [SLOW TEST:37.419 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
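The projected downwardAPI test above mutates pod annotations and waits for the mounted file to refresh. A sketch of the volume shape involved (a fragment, assuming the standard downward API projection; not the suite's exact pod spec):

```yaml
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations               # kubelet rewrites this file when
            fieldRef:                       # the pod's annotations are updated
              fieldPath: metadata.annotations
```

The "Successfully updated pod" line marks the annotation patch; the test then reads the projected file until the new values appear.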
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:29:22.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 17 12:29:33.819: INFO: Successfully updated pod "pod-update-activedeadlineseconds-db3c3881-20c8-11ea-a5ef-0242ac110004"
Dec 17 12:29:33.819: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-db3c3881-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-pods-c6z7v" to be "terminated due to deadline exceeded"
Dec 17 12:29:33.853: INFO: Pod "pod-update-activedeadlineseconds-db3c3881-20c8-11ea-a5ef-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 34.587464ms
Dec 17 12:29:35.884: INFO: Pod "pod-update-activedeadlineseconds-db3c3881-20c8-11ea-a5ef-0242ac110004": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.065258452s
Dec 17 12:29:35.884: INFO: Pod "pod-update-activedeadlineseconds-db3c3881-20c8-11ea-a5ef-0242ac110004" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:29:35.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-c6z7v" for this suite.
Dec 17 12:29:42.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:29:42.501: INFO: namespace: e2e-tests-pods-c6z7v, resource: bindings, ignored listing per whitelist
Dec 17 12:29:42.949: INFO: namespace e2e-tests-pods-c6z7v deletion completed in 7.038641771s

• [SLOW TEST:20.192 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
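The Pods test above patches a running pod's `activeDeadlineSeconds` and then waits for it to be killed. A fragment showing the field being set (the concrete value the suite uses is not shown in the log; this one is illustrative):

```yaml
# Once the deadline elapses, the kubelet terminates the pod and it reports
# Phase=Failed, Reason=DeadlineExceeded, matching the log lines above.
spec:
  activeDeadlineSeconds: 5    # illustrative value
```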
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:29:42.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e72eb957-20c8-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:29:43.439: INFO: Waiting up to 5m0s for pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-plvj2" to be "success or failure"
Dec 17 12:29:43.450: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.031148ms
Dec 17 12:29:45.912: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473093265s
Dec 17 12:29:47.933: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493280341s
Dec 17 12:29:50.284: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.844796819s
Dec 17 12:29:52.319: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.880104659s
Dec 17 12:29:54.352: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.913011421s
Dec 17 12:29:56.647: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.207756854s
STEP: Saw pod success
Dec 17 12:29:56.647: INFO: Pod "pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:29:56.664: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 17 12:29:56.730: INFO: Waiting for pod pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:29:56.736: INFO: Pod pod-secrets-e732ba37-20c8-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:29:56.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-plvj2" for this suite.
Dec 17 12:30:02.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:30:02.969: INFO: namespace: e2e-tests-secrets-plvj2, resource: bindings, ignored listing per whitelist
Dec 17 12:30:03.012: INFO: namespace e2e-tests-secrets-plvj2 deletion completed in 6.226793502s

• [SLOW TEST:20.063 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
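The Secrets test above covers key-to-path mappings with a per-item mode. A fragment of the volume definition being exercised (Secret name, key, and path are hypothetical):

```yaml
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # hypothetical Secret name
      items:
      - key: data-1                         # remap this key...
        path: new-path-data-1               # ...to a different file path
        mode: 0400                          # per-item mode overrides defaultMode
```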
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:30:03.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:31:03.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5gmpq" for this suite.
Dec 17 12:31:27.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:31:27.619: INFO: namespace: e2e-tests-container-probe-5gmpq, resource: bindings, ignored listing per whitelist
Dec 17 12:31:27.736: INFO: namespace e2e-tests-container-probe-5gmpq deletion completed in 24.297643346s

• [SLOW TEST:84.723 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
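The probe test above asserts that a pod with an always-failing readiness probe is never marked Ready and is never restarted. A sketch of such a probe (the suite's actual probe command is not shown in the log):

```yaml
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: pod stays Ready=false
      initialDelaySeconds: 5
      periodSeconds: 5
```

Unlike a liveness probe, a failing readiness probe only removes the pod from service endpoints; it does not restart the container, which is why `restartCount` stays 0 for the test's full observation window.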
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:31:27.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 17 12:31:38.696: INFO: Successfully updated pod "labelsupdate259cd654-20c9-11ea-a5ef-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:31:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vhv6l" for this suite.
Dec 17 12:32:06.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:32:07.063: INFO: namespace: e2e-tests-projected-vhv6l, resource: bindings, ignored listing per whitelist
Dec 17 12:32:07.075: INFO: namespace e2e-tests-projected-vhv6l deletion completed in 26.158550759s

• [SLOW TEST:39.338 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:32:07.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2ks9w
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 17 12:32:07.348: INFO: Found 0 stateful pods, waiting for 3
Dec 17 12:32:17.367: INFO: Found 1 stateful pods, waiting for 3
Dec 17 12:32:27.477: INFO: Found 2 stateful pods, waiting for 3
Dec 17 12:32:37.504: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:32:37.504: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:32:37.504: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 12:32:47.377: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:32:47.377: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:32:47.377: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:32:47.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ks9w ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 12:32:48.201: INFO: stderr: ""
Dec 17 12:32:48.201: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 12:32:48.201: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 17 12:32:48.278: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 17 12:32:58.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ks9w ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:32:59.108: INFO: stderr: ""
Dec 17 12:32:59.108: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 12:32:59.108: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 12:32:59.317: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:32:59.317: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:32:59.317: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:32:59.317: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:09.351: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:33:09.351: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:09.351: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:09.351: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:19.823: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:33:19.823: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:19.823: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:29.345: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:33:29.345: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:29.345: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:39.813: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:33:39.813: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 12:33:59.356: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 17 12:34:09.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ks9w ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 12:34:10.137: INFO: stderr: ""
Dec 17 12:34:10.137: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 12:34:10.137: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 17 12:34:20.233: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 17 12:34:30.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ks9w ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:34:31.251: INFO: stderr: ""
Dec 17 12:34:31.251: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 12:34:31.251: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 12:34:41.523: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:34:41.523: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:34:41.523: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:34:51.544: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:34:51.544: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:34:51.544: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:35:02.725: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:35:02.725: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:35:11.542: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:35:11.543: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:35:21.544: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
Dec 17 12:35:21.544: INFO: Waiting for Pod e2e-tests-statefulset-2ks9w/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 12:35:31.555: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ks9w/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 17 12:35:41.552: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2ks9w
Dec 17 12:35:41.558: INFO: Scaling statefulset ss2 to 0
Dec 17 12:36:21.683: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 12:36:21.700: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:36:21.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2ks9w" for this suite.
Dec 17 12:36:29.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:36:30.077: INFO: namespace: e2e-tests-statefulset-2ks9w, resource: bindings, ignored listing per whitelist
Dec 17 12:36:30.175: INFO: namespace e2e-tests-statefulset-2ks9w deletion completed in 8.355669767s

• [SLOW TEST:263.100 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
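The spec above drives a StatefulSet rolling update and then rolls it back to the previous controller revision (the `ss2-6c5cd755cd` → `ss2-7c9b54fd4c` → `ss2-6c5cd755cd` revision messages). Outside the e2e framework, the same flow can be reproduced with a manifest like the following — a minimal sketch; the names and image mirror the log, but the exact spec the test generates is an assumption:

```yaml
# Hypothetical manifest mirroring the test's StatefulSet "ss2".
# RollingUpdate is the update strategy the test exercises.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: ss2
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

A rollback like the one logged at 12:34 can then be triggered with `kubectl rollout undo statefulset/ss2`; the controller replaces pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), which is why the per-pod "Waiting for Pod … to have revision" messages drain from the highest ordinal down.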
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:36:30.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 12:36:30.536: INFO: Number of nodes with available pods: 0
Dec 17 12:36:30.536: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:31.929: INFO: Number of nodes with available pods: 0
Dec 17 12:36:31.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:32.656: INFO: Number of nodes with available pods: 0
Dec 17 12:36:32.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:33.806: INFO: Number of nodes with available pods: 0
Dec 17 12:36:33.806: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:34.572: INFO: Number of nodes with available pods: 0
Dec 17 12:36:34.572: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:35.654: INFO: Number of nodes with available pods: 0
Dec 17 12:36:35.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:36.577: INFO: Number of nodes with available pods: 0
Dec 17 12:36:36.578: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:37.605: INFO: Number of nodes with available pods: 0
Dec 17 12:36:37.605: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:38.592: INFO: Number of nodes with available pods: 0
Dec 17 12:36:38.592: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:39.562: INFO: Number of nodes with available pods: 0
Dec 17 12:36:39.562: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:36:40.603: INFO: Number of nodes with available pods: 1
Dec 17 12:36:40.603: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 17 12:36:41.059: INFO: Number of nodes with available pods: 1
Dec 17 12:36:41.059: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5tk2x, will wait for the garbage collector to delete the pods
Dec 17 12:36:43.173: INFO: Deleting DaemonSet.extensions daemon-set took: 1.00814846s
Dec 17 12:36:43.774: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.24006ms
Dec 17 12:36:48.135: INFO: Number of nodes with available pods: 0
Dec 17 12:36:48.135: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 12:36:48.143: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5tk2x/daemonsets","resourceVersion":"15124490"},"items":null}

Dec 17 12:36:48.148: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5tk2x/pods","resourceVersion":"15124490"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:36:48.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5tk2x" for this suite.
Dec 17 12:36:56.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:36:56.319: INFO: namespace: e2e-tests-daemonsets-5tk2x, resource: bindings, ignored listing per whitelist
Dec 17 12:36:56.402: INFO: namespace e2e-tests-daemonsets-5tk2x deletion completed in 8.223849405s

• [SLOW TEST:26.226 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
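The DaemonSet spec above verifies that the controller revives daemon pods after failure ("Set a daemon pod's phase to 'Failed', check that the daemon pod is revived"). The e2e test forces the phase to Failed through the API; deleting a daemon pod by hand exercises the same reconciliation path. A rough manual equivalent, with a hypothetical manifest file and placeholder pod name:

```shell
# Hypothetical reproduction of the revival behaviour checked above.
kubectl create -f daemon-set.yaml            # assumed manifest for "daemon-set"
kubectl get pods -l name=daemon-set          # one pod per schedulable node
kubectl delete pod <daemon-pod-name>         # stand-in for the forced failure
kubectl get pods -l name=daemon-set --watch  # the controller recreates the pod
```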
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:36:56.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e989710b-20c9-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:36:56.776: INFO: Waiting up to 5m0s for pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-bd65k" to be "success or failure"
Dec 17 12:36:56.825: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 49.205082ms
Dec 17 12:36:58.831: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055770218s
Dec 17 12:37:00.881: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105029929s
Dec 17 12:37:03.855: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.079738701s
Dec 17 12:37:05.906: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.130553284s
Dec 17 12:37:07.939: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.163223637s
STEP: Saw pod success
Dec 17 12:37:07.939: INFO: Pod "pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:37:07.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 17 12:37:08.703: INFO: Waiting for pod pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:37:08.712: INFO: Pod pod-configmaps-e992a858-20c9-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:37:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bd65k" for this suite.
Dec 17 12:37:14.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:37:15.104: INFO: namespace: e2e-tests-configmap-bd65k, resource: bindings, ignored listing per whitelist
Dec 17 12:37:15.122: INFO: namespace e2e-tests-configmap-bd65k deletion completed in 6.405605861s

• [SLOW TEST:18.719 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
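The ConfigMap-volume spec above creates a ConfigMap, mounts it into a pod, and expects the pod to read the data and exit Succeeded ("success or failure"). A minimal sketch of the equivalent objects — key names are illustrative, and the mounttest image is an assumption about what the suite uses for this check:

```yaml
# Hypothetical equivalent of the "consume configMaps" pod above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args: ["--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

With `restartPolicy: Never`, the pod runs the read once and transitions Pending → Succeeded, matching the phase progression logged at 12:36–12:37.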
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:37:15.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:37:15.578: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 12:37:15.607: INFO: Number of nodes with available pods: 0
Dec 17 12:37:15.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:16.625: INFO: Number of nodes with available pods: 0
Dec 17 12:37:16.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:18.985: INFO: Number of nodes with available pods: 0
Dec 17 12:37:18.985: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:19.643: INFO: Number of nodes with available pods: 0
Dec 17 12:37:19.644: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:20.658: INFO: Number of nodes with available pods: 0
Dec 17 12:37:20.658: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:21.645: INFO: Number of nodes with available pods: 0
Dec 17 12:37:21.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:22.742: INFO: Number of nodes with available pods: 0
Dec 17 12:37:22.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:23.689: INFO: Number of nodes with available pods: 0
Dec 17 12:37:23.689: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:24.663: INFO: Number of nodes with available pods: 0
Dec 17 12:37:24.663: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:26.034: INFO: Number of nodes with available pods: 0
Dec 17 12:37:26.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:26.665: INFO: Number of nodes with available pods: 0
Dec 17 12:37:26.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:27.621: INFO: Number of nodes with available pods: 1
Dec 17 12:37:27.621: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 17 12:37:27.726: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:28.788: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:29.801: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:30.940: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:31.815: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:32.798: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:33.798: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:34.808: INFO: Wrong image for pod: daemon-set-l8qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 12:37:34.808: INFO: Pod daemon-set-l8qk4 is not available
Dec 17 12:37:35.824: INFO: Pod daemon-set-swkp2 is not available
Dec 17 12:37:36.796: INFO: Pod daemon-set-swkp2 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 17 12:37:36.809: INFO: Number of nodes with available pods: 0
Dec 17 12:37:36.809: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:37.876: INFO: Number of nodes with available pods: 0
Dec 17 12:37:37.876: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:38.841: INFO: Number of nodes with available pods: 0
Dec 17 12:37:38.841: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:39.866: INFO: Number of nodes with available pods: 0
Dec 17 12:37:39.866: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:41.264: INFO: Number of nodes with available pods: 0
Dec 17 12:37:41.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:42.297: INFO: Number of nodes with available pods: 0
Dec 17 12:37:42.297: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:42.832: INFO: Number of nodes with available pods: 0
Dec 17 12:37:42.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:43.843: INFO: Number of nodes with available pods: 0
Dec 17 12:37:43.843: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:44.861: INFO: Number of nodes with available pods: 0
Dec 17 12:37:44.862: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:37:45.920: INFO: Number of nodes with available pods: 1
Dec 17 12:37:45.921: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nnm6w, will wait for the garbage collector to delete the pods
Dec 17 12:37:46.021: INFO: Deleting DaemonSet.extensions daemon-set took: 14.936898ms
Dec 17 12:37:46.122: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.702979ms
Dec 17 12:37:52.738: INFO: Number of nodes with available pods: 0
Dec 17 12:37:52.738: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 12:37:52.743: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nnm6w/daemonsets","resourceVersion":"15124654"},"items":null}

Dec 17 12:37:52.748: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nnm6w/pods","resourceVersion":"15124654"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:37:52.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nnm6w" for this suite.
Dec 17 12:37:58.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:37:58.965: INFO: namespace: e2e-tests-daemonsets-nnm6w, resource: bindings, ignored listing per whitelist
Dec 17 12:37:58.986: INFO: namespace e2e-tests-daemonsets-nnm6w deletion completed in 6.221086358s

• [SLOW TEST:43.864 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
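The RollingUpdate DaemonSet spec above swaps the pod template image and then polls until no pod still runs the old one — that is what the repeated "Wrong image for pod" lines track (expected redis:1.0, still nginx:1.14-alpine). A hedged manual equivalent; the images come from the log, but the container name `app` is an assumption:

```shell
# Hypothetical manual equivalent of the image update driven above.
kubectl set image daemonset/daemon-set \
  app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set  # blocks until updated pods are available
```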
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:37:58.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 17 12:37:59.396: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6cpn8,SelfLink:/api/v1/namespaces/e2e-tests-watch-6cpn8/configmaps/e2e-watch-test-label-changed,UID:0ed0f7d0-20ca-11ea-a994-fa163e34d433,ResourceVersion:15124685,Generation:0,CreationTimestamp:2019-12-17 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 12:37:59.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6cpn8,SelfLink:/api/v1/namespaces/e2e-tests-watch-6cpn8/configmaps/e2e-watch-test-label-changed,UID:0ed0f7d0-20ca-11ea-a994-fa163e34d433,ResourceVersion:15124686,Generation:0,CreationTimestamp:2019-12-17 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 17 12:37:59.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6cpn8,SelfLink:/api/v1/namespaces/e2e-tests-watch-6cpn8/configmaps/e2e-watch-test-label-changed,UID:0ed0f7d0-20ca-11ea-a994-fa163e34d433,ResourceVersion:15124687,Generation:0,CreationTimestamp:2019-12-17 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 17 12:38:09.475: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6cpn8,SelfLink:/api/v1/namespaces/e2e-tests-watch-6cpn8/configmaps/e2e-watch-test-label-changed,UID:0ed0f7d0-20ca-11ea-a994-fa163e34d433,ResourceVersion:15124701,Generation:0,CreationTimestamp:2019-12-17 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 12:38:09.476: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6cpn8,SelfLink:/api/v1/namespaces/e2e-tests-watch-6cpn8/configmaps/e2e-watch-test-label-changed,UID:0ed0f7d0-20ca-11ea-a994-fa163e34d433,ResourceVersion:15124702,Generation:0,CreationTimestamp:2019-12-17 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 17 12:38:09.476: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6cpn8,SelfLink:/api/v1/namespaces/e2e-tests-watch-6cpn8/configmaps/e2e-watch-test-label-changed,UID:0ed0f7d0-20ca-11ea-a994-fa163e34d433,ResourceVersion:15124703,Generation:0,CreationTimestamp:2019-12-17 12:37:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:38:09.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6cpn8" for this suite.
Dec 17 12:38:15.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:38:15.739: INFO: namespace: e2e-tests-watch-6cpn8, resource: bindings, ignored listing per whitelist
Dec 17 12:38:15.929: INFO: namespace e2e-tests-watch-6cpn8 deletion completed in 6.443904751s

• [SLOW TEST:16.942 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
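The watch spec above demonstrates a subtle selector semantic: when a watched object's label changes so it no longer matches the selector, the watcher receives a DELETED event even though the object still exists; restoring the label delivers a fresh ADDED event (visible in the paired event dumps at 12:37:59 and 12:38:09). A rough reproduction with a label-filtered watch:

```shell
# Hypothetical reproduction of the label-selector watch semantics above.
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
kubectl create configmap e2e-watch-test-label-changed
kubectl label configmap e2e-watch-test-label-changed \
  watch-this-configmap=label-changed-and-restored          # watcher sees ADDED
kubectl label --overwrite configmap e2e-watch-test-label-changed \
  watch-this-configmap=no-longer-matching                  # watcher sees DELETED
```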
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:38:15.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:38:26.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kcsl9" for this suite.
Dec 17 12:39:08.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:39:08.493: INFO: namespace: e2e-tests-kubelet-test-kcsl9, resource: bindings, ignored listing per whitelist
Dec 17 12:39:08.523: INFO: namespace e2e-tests-kubelet-test-kcsl9 deletion completed in 42.227841097s

• [SLOW TEST:52.594 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
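The Kubelet spec above shows no intermediate STEP lines because the pod creation and log check happen inside framework helpers; the scenario itself is simply a busybox command pod whose stdout must appear in `kubectl logs`. A minimal sketch with illustrative names:

```yaml
# Hypothetical pod equivalent to the "print the output to logs" check above.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]
```

After the pod completes, `kubectl logs busybox-scheduling` should contain the echoed line — the property this conformance test asserts.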
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:39:08.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 12:39:08.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6gw26'
Dec 17 12:39:10.775: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 12:39:10.775: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 17 12:39:12.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6gw26'
Dec 17 12:39:13.694: INFO: stderr: ""
Dec 17 12:39:13.695: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:39:13.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6gw26" for this suite.
Dec 17 12:39:20.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:39:20.213: INFO: namespace: e2e-tests-kubectl-6gw26, resource: bindings, ignored listing per whitelist
Dec 17 12:39:20.250: INFO: namespace e2e-tests-kubectl-6gw26 deletion completed in 6.263048429s

• [SLOW TEST:11.726 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
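The test above exercised the now-removed `--generator=deployment/apps.v1` path of `kubectl run` (note the deprecation warning in its stderr). A minimal sketch of the equivalent flow on a modern kubectl, where Deployments are created explicitly; the namespace below is illustrative, not the one from this run:

```shell
# Create the Deployment directly instead of via the deprecated generator:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=demo-ns

# Verify the pod controlled by the Deployment gets created
# (kubectl create deployment labels pods with app=<name>):
kubectl get pods --namespace=demo-ns \
  --selector=app=e2e-test-nginx-deployment

# Clean up, mirroring the test's AfterEach:
kubectl delete deployment e2e-test-nginx-deployment --namespace=demo-ns
```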
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:39:20.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3f415a32-20ca-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:39:20.563: INFO: Waiting up to 5m0s for pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-klkmb" to be "success or failure"
Dec 17 12:39:20.576: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.372233ms
Dec 17 12:39:22.611: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048335026s
Dec 17 12:39:24.629: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066206858s
Dec 17 12:39:27.182: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618892453s
Dec 17 12:39:29.203: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639720015s
Dec 17 12:39:31.216: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.653563983s
STEP: Saw pod success
Dec 17 12:39:31.216: INFO: Pod "pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:39:31.221: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 17 12:39:31.683: INFO: Waiting for pod pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:39:32.054: INFO: Pod pod-secrets-3f43be79-20ca-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:39:32.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-klkmb" for this suite.
Dec 17 12:39:38.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:39:38.302: INFO: namespace: e2e-tests-secrets-klkmb, resource: bindings, ignored listing per whitelist
Dec 17 12:39:38.392: INFO: namespace e2e-tests-secrets-klkmb deletion completed in 6.325243978s

• [SLOW TEST:18.142 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
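What the Secrets test above verifies can be reproduced by hand: create a Secret, mount it as a volume, and have the container read a key from the mount path. All names below are illustrative, not taken from this run:

```shell
# Secret with a single key the pod will consume as a file:
kubectl create secret generic secret-test --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF

# Once the pod has Succeeded, its logs contain the secret payload:
kubectl logs pod-secrets-demo
```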
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:39:38.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:39:38.763: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 17 12:39:38.781: INFO: Number of nodes with available pods: 0
Dec 17 12:39:38.781: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 17 12:39:38.857: INFO: Number of nodes with available pods: 0
Dec 17 12:39:38.857: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:39.884: INFO: Number of nodes with available pods: 0
Dec 17 12:39:39.884: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:41.156: INFO: Number of nodes with available pods: 0
Dec 17 12:39:41.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:41.939: INFO: Number of nodes with available pods: 0
Dec 17 12:39:41.939: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:42.917: INFO: Number of nodes with available pods: 0
Dec 17 12:39:42.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:43.889: INFO: Number of nodes with available pods: 0
Dec 17 12:39:43.889: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:45.697: INFO: Number of nodes with available pods: 0
Dec 17 12:39:45.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:45.972: INFO: Number of nodes with available pods: 0
Dec 17 12:39:45.972: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:46.881: INFO: Number of nodes with available pods: 0
Dec 17 12:39:46.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:47.913: INFO: Number of nodes with available pods: 0
Dec 17 12:39:47.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:48.879: INFO: Number of nodes with available pods: 1
Dec 17 12:39:48.879: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 17 12:39:49.061: INFO: Number of nodes with available pods: 1
Dec 17 12:39:49.061: INFO: Number of running nodes: 0, number of available pods: 1
Dec 17 12:39:50.079: INFO: Number of nodes with available pods: 0
Dec 17 12:39:50.079: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 17 12:39:50.124: INFO: Number of nodes with available pods: 0
Dec 17 12:39:50.124: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:51.464: INFO: Number of nodes with available pods: 0
Dec 17 12:39:51.464: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:52.146: INFO: Number of nodes with available pods: 0
Dec 17 12:39:52.146: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:53.492: INFO: Number of nodes with available pods: 0
Dec 17 12:39:53.492: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:54.138: INFO: Number of nodes with available pods: 0
Dec 17 12:39:54.138: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:55.142: INFO: Number of nodes with available pods: 0
Dec 17 12:39:55.142: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:56.140: INFO: Number of nodes with available pods: 0
Dec 17 12:39:56.141: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:57.142: INFO: Number of nodes with available pods: 0
Dec 17 12:39:57.142: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:58.144: INFO: Number of nodes with available pods: 0
Dec 17 12:39:58.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:39:59.144: INFO: Number of nodes with available pods: 0
Dec 17 12:39:59.144: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:00.161: INFO: Number of nodes with available pods: 0
Dec 17 12:40:00.161: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:01.145: INFO: Number of nodes with available pods: 0
Dec 17 12:40:01.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:02.217: INFO: Number of nodes with available pods: 0
Dec 17 12:40:02.217: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:04.407: INFO: Number of nodes with available pods: 0
Dec 17 12:40:04.407: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:05.479: INFO: Number of nodes with available pods: 0
Dec 17 12:40:05.479: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:06.153: INFO: Number of nodes with available pods: 0
Dec 17 12:40:06.153: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:07.217: INFO: Number of nodes with available pods: 0
Dec 17 12:40:07.217: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:08.160: INFO: Number of nodes with available pods: 0
Dec 17 12:40:08.160: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:09.604: INFO: Number of nodes with available pods: 0
Dec 17 12:40:09.604: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:10.163: INFO: Number of nodes with available pods: 0
Dec 17 12:40:10.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:11.139: INFO: Number of nodes with available pods: 0
Dec 17 12:40:11.139: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:12.153: INFO: Number of nodes with available pods: 0
Dec 17 12:40:12.153: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 12:40:13.162: INFO: Number of nodes with available pods: 1
Dec 17 12:40:13.162: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zq2wd, will wait for the garbage collector to delete the pods
Dec 17 12:40:13.268: INFO: Deleting DaemonSet.extensions daemon-set took: 36.725927ms
Dec 17 12:40:13.469: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.47251ms
Dec 17 12:40:22.927: INFO: Number of nodes with available pods: 0
Dec 17 12:40:22.927: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 12:40:22.940: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zq2wd/daemonsets","resourceVersion":"15124997"},"items":null}

Dec 17 12:40:22.956: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zq2wd/pods","resourceVersion":"15124998"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:40:22.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zq2wd" for this suite.
Dec 17 12:40:29.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:40:29.316: INFO: namespace: e2e-tests-daemonsets-zq2wd, resource: bindings, ignored listing per whitelist
Dec 17 12:40:29.358: INFO: namespace e2e-tests-daemonsets-zq2wd deletion completed in 6.348777136s

• [SLOW TEST:50.966 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
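The DaemonSet test above drives scheduling purely through node labels: a DaemonSet with a nodeSelector gets a pod only on matching nodes, loses it when the label changes, and regains it when its own selector is updated. A sketch of that label dance (node name, label key, and values are illustrative):

```shell
# Label a node blue -> the daemon pod with nodeSelector color=blue lands on it:
kubectl label node my-node color=blue

# Relabel it green -> the selector no longer matches and the pod is removed:
kubectl label node my-node color=green --overwrite

# Update the DaemonSet's own nodeSelector to green (and its update strategy
# to RollingUpdate, as the test does) -> the pod is scheduled again:
kubectl patch daemonset daemon-set --type merge -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}},"updateStrategy":{"type":"RollingUpdate"}}}'
```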
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:40:29.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:40:39.773: INFO: Waiting up to 5m0s for pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004" in namespace "e2e-tests-pods-66t9q" to be "success or failure"
Dec 17 12:40:39.973: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 200.565173ms
Dec 17 12:40:42.028: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254876506s
Dec 17 12:40:44.070: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297606064s
Dec 17 12:40:46.613: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.840707518s
Dec 17 12:40:48.639: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.865897083s
Dec 17 12:40:50.655: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.882740945s
STEP: Saw pod success
Dec 17 12:40:50.656: INFO: Pod "client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:40:50.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004 container env3cont: 
STEP: delete the pod
Dec 17 12:40:51.429: INFO: Waiting for pod client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:40:52.083: INFO: Pod client-envvars-6e7c74c9-20ca-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:40:52.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-66t9q" for this suite.
Dec 17 12:41:46.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:41:46.466: INFO: namespace: e2e-tests-pods-66t9q, resource: bindings, ignored listing per whitelist
Dec 17 12:41:46.498: INFO: namespace e2e-tests-pods-66t9q deletion completed in 54.398574335s

• [SLOW TEST:77.139 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
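The Pods test above checks service environment-variable injection: a pod started *after* a Service exists receives `<SERVICE>_SERVICE_HOST` and `<SERVICE>_SERVICE_PORT` variables for it. A hedged sketch with illustrative names:

```shell
# A Service must exist before the consuming pod starts:
kubectl create deployment backend --image=docker.io/library/nginx:1.14-alpine
kubectl expose deployment backend --port=80 --name=backend

# A pod created afterwards sees BACKEND_SERVICE_HOST / BACKEND_SERVICE_PORT
# in its environment (pods created before the Service would not):
kubectl run env-check --image=busybox:1.29 --restart=Never -- \
  sh -c 'env | grep BACKEND_SERVICE'
kubectl logs env-check
```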
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:41:46.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:42:49.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-llfpn" for this suite.
Dec 17 12:42:57.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:42:57.617: INFO: namespace: e2e-tests-container-runtime-llfpn, resource: bindings, ignored listing per whitelist
Dec 17 12:42:57.628: INFO: namespace e2e-tests-container-runtime-llfpn deletion completed in 8.224506931s

• [SLOW TEST:71.128 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
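The three containers in the runtime test above ('terminate-cmd-rpa', 'rpof', 'rpn') correspond to the three pod restart policies: Always, OnFailure, Never. A minimal sketch of the Never case, which the test expects to end with `Phase=Failed` and `RestartCount=0` when the container exits non-zero (pod name and image are illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: Never        # rpa=Always, rpof=OnFailure, rpn=Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
EOF

# With restartPolicy: Never, the failing container is not restarted,
# so the pod settles into the Failed phase:
kubectl get pod terminate-cmd-demo -o jsonpath='{.status.phase}'
```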
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:42:57.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-5f2n4
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5f2n4
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5f2n4
Dec 17 12:42:57.924: INFO: Found 0 stateful pods, waiting for 1
Dec 17 12:43:07.941: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 17 12:43:07.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 12:43:08.575: INFO: stderr: ""
Dec 17 12:43:08.576: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 12:43:08.576: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 17 12:43:08.589: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 17 12:43:18.630: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 12:43:18.631: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 12:43:18.695: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999725s
Dec 17 12:43:19.713: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.9764405s
Dec 17 12:43:20.733: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.957961819s
Dec 17 12:43:21.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.938233198s
Dec 17 12:43:22.801: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.912674386s
Dec 17 12:43:23.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.869791537s
Dec 17 12:43:24.886: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.839610264s
Dec 17 12:43:25.912: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.784741863s
Dec 17 12:43:26.929: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.759592349s
Dec 17 12:43:28.000: INFO: Verifying statefulset ss doesn't scale past 1 for another 742.022075ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5f2n4
Dec 17 12:43:29.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:43:30.086: INFO: stderr: ""
Dec 17 12:43:30.086: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 12:43:30.086: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 12:43:30.181: INFO: Found 1 stateful pods, waiting for 3
Dec 17 12:43:40.371: INFO: Found 2 stateful pods, waiting for 3
Dec 17 12:43:50.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:43:50.224: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:43:50.224: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 12:44:00.213: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:44:00.213: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 12:44:00.213: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 17 12:44:00.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 12:44:01.055: INFO: stderr: ""
Dec 17 12:44:01.055: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 12:44:01.055: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 17 12:44:01.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 12:44:01.539: INFO: stderr: ""
Dec 17 12:44:01.539: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 12:44:01.539: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 17 12:44:01.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 12:44:01.995: INFO: stderr: ""
Dec 17 12:44:01.995: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 12:44:01.995: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 17 12:44:01.995: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 12:44:02.012: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 17 12:44:12.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 12:44:12.115: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 12:44:12.115: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 12:44:12.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999556s
Dec 17 12:44:13.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985432501s
Dec 17 12:44:14.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972483362s
Dec 17 12:44:15.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960259688s
Dec 17 12:44:16.507: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.666771591s
Dec 17 12:44:17.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.628789313s
Dec 17 12:44:18.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.607153123s
Dec 17 12:44:19.638: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.569775654s
Dec 17 12:44:20.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.497406695s
Dec 17 12:44:21.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 419.37591ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-5f2n4
Dec 17 12:44:22.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:44:23.396: INFO: stderr: ""
Dec 17 12:44:23.396: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 12:44:23.396: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 12:44:23.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:44:24.308: INFO: stderr: ""
Dec 17 12:44:24.308: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 12:44:24.308: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 12:44:24.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:44:25.126: INFO: rc: 126
Dec 17 12:44:25.126: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/1087/ns/ipc\" caused \"lstat /proc/1087/ns/ipc: no such file or directory\"": unknown
 command terminated with exit code 126
 []  0xc000a4c000 exit status 126   true [0xc001ed0650 0xc001ed0670 0xc001ed0688] [0xc001ed0650 0xc001ed0670 0xc001ed0688] [0xc001ed0668 0xc001ed0680] [0x935700 0x935700] 0xc001fe52c0 }:
Command stdout:
OCI runtime exec failed: exec failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/1087/ns/ipc\" caused \"lstat /proc/1087/ns/ipc: no such file or directory\"": unknown

stderr:
command terminated with exit code 126

error:
exit status 126

Dec 17 12:44:35.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:44:35.239: INFO: rc: 1
Dec 17 12:44:35.240: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001ec8ed0 exit status 1   true [0xc0013b03b0 0xc0013b03c8 0xc0013b03e0] [0xc0013b03b0 0xc0013b03c8 0xc0013b03e0] [0xc0013b03c0 0xc0013b03d8] [0x935700 0x935700] 0xc001a9df20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 17 12:49:10.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:49:10.186: INFO: rc: 1
Dec 17 12:49:10.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000aba120 exit status 1   true [0xc00200a000 0xc00200a018 0xc00200a030] [0xc00200a000 0xc00200a018 0xc00200a030] [0xc00200a010 0xc00200a028] [0x935700 0x935700] 0xc001a80240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 17 12:49:20.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:49:20.302: INFO: rc: 1
Dec 17 12:49:20.303: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000aba2a0 exit status 1   true [0xc00200a038 0xc00200a050 0xc00200a068] [0xc00200a038 0xc00200a050 0xc00200a068] [0xc00200a048 0xc00200a060] [0x935700 0x935700] 0xc001a805a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 17 12:49:30.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5f2n4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 12:49:30.497: INFO: rc: 1
Dec 17 12:49:30.498: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Dec 17 12:49:30.498: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 17 12:49:30.556: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5f2n4
Dec 17 12:49:30.563: INFO: Scaling statefulset ss to 0
Dec 17 12:49:30.606: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 12:49:30.622: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:49:30.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5f2n4" for this suite.
Dec 17 12:49:38.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:49:38.936: INFO: namespace: e2e-tests-statefulset-5f2n4, resource: bindings, ignored listing per whitelist
Dec 17 12:49:39.005: INFO: namespace e2e-tests-statefulset-5f2n4 deletion completed in 8.25997641s

• [SLOW TEST:401.377 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:49:39.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 17 12:49:39.206: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 12:49:39.269: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 12:49:39.275: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 17 12:49:39.297: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:49:39.297: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 17 12:49:39.297: INFO: 	Container coredns ready: true, restart count 0
Dec 17 12:49:39.297: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 17 12:49:39.297: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 12:49:39.297: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:49:39.297: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 17 12:49:39.297: INFO: 	Container weave ready: true, restart count 0
Dec 17 12:49:39.297: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 12:49:39.297: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 17 12:49:39.297: INFO: 	Container coredns ready: true, restart count 0
Dec 17 12:49:39.297: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:49:39.297: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b76146ff-20cb-11ea-a5ef-0242ac110004 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b76146ff-20cb-11ea-a5ef-0242ac110004 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b76146ff-20cb-11ea-a5ef-0242ac110004
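The sequence above (launch an unlabeled throwaway pod to discover a schedulable node, apply a random label to that node, then relaunch with a matching nodeSelector) corresponds to a manifest like the following sketch; the label key mirrors the test's generated e2e label, while the pod name and image are illustrative assumptions:

```yaml
# Hypothetical equivalent of the test's relaunched pod: it can only be
# scheduled onto the node carrying the random e2e label applied above.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-b76146ff-20cb-11ea-a5ef-0242ac110004: "42"
  containers:
  - name: with-labels
    image: gcr.io/kubernetes-e2e-test-images/pause:3.1   # assumed image
```

Once the pod is observed Running on the labeled node, the test removes the label and verifies it is gone, as the final STEPs show.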
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:50:03.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-ctr9x" for this suite.
Dec 17 12:50:15.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:50:16.261: INFO: namespace: e2e-tests-sched-pred-ctr9x, resource: bindings, ignored listing per whitelist
Dec 17 12:50:16.271: INFO: namespace e2e-tests-sched-pred-ctr9x deletion completed in 12.405086544s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.265 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:50:16.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nthh9;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nthh9;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nthh9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 110.179.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.179.110_udp@PTR;check="$$(dig +tcp +noall +answer +search 110.179.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.179.110_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nthh9;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nthh9;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nthh9.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nthh9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nthh9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 110.179.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.179.110_udp@PTR;check="$$(dig +tcp +noall +answer +search 110.179.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.179.110_tcp@PTR;sleep 1; done
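The wheezy and jessie commands above share one pattern: each DNS lookup that returns a non-empty answer drops an `OK` marker file under `/results`, which the test later collects from the probe pod. A minimal, cluster-independent sketch of that pattern (the `probe` helper and `/tmp/results` path are illustrative names, not from the test; the real loop issues `dig +notcp/+tcp +noall +answer +search <name> A|SRV|PTR` lookups):

```shell
#!/bin/sh
# Sketch of the e2e DNS probe loop: run a lookup command, and if it
# produces a non-empty answer, write an OK marker for the harness to read.
probe() {
  marker="$1"; shift
  check="$("$@")" && test -n "$check" && echo OK > "$marker"
}

mkdir -p /tmp/results
for i in $(seq 1 2); do
  # `hostname` stands in for the dig lookups so the sketch runs
  # without a live DNS service; it always prints a non-empty answer.
  probe /tmp/results/udp@example hostname
  sleep 1
done
cat /tmp/results/udp@example
```

Because the loop re-probes every second for up to 600 iterations, a lookup that initially fails (as in the errors below) can still succeed on a later pass, which is why the test ultimately reports success.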

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 12:50:32.832: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.837: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.845: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nthh9 from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.863: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nthh9 from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.871: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.878: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.884: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.888: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.892: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.896: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.902: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.905: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.909: INFO: Unable to read 10.101.179.110_udp@PTR from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.912: INFO: Unable to read 10.101.179.110_tcp@PTR from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.915: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.918: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.921: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nthh9 from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.924: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nthh9 from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.927: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.931: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.937: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.940: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.944: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.947: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.950: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.953: INFO: Unable to read 10.101.179.110_udp@PTR from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.957: INFO: Unable to read 10.101.179.110_tcp@PTR from pod e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004: the server could not find the requested resource (get pods dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004)
Dec 17 12:50:32.957: INFO: Lookups using e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nthh9 wheezy_tcp@dns-test-service.e2e-tests-dns-nthh9 wheezy_udp@dns-test-service.e2e-tests-dns-nthh9.svc wheezy_tcp@dns-test-service.e2e-tests-dns-nthh9.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.179.110_udp@PTR 10.101.179.110_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nthh9 jessie_tcp@dns-test-service.e2e-tests-dns-nthh9 jessie_udp@dns-test-service.e2e-tests-dns-nthh9.svc jessie_tcp@dns-test-service.e2e-tests-dns-nthh9.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nthh9.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nthh9.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.179.110_udp@PTR 10.101.179.110_tcp@PTR]

Dec 17 12:50:38.889: INFO: DNS probes using e2e-tests-dns-nthh9/dns-test-c65d451d-20cb-11ea-a5ef-0242ac110004 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:50:39.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-nthh9" for this suite.
Dec 17 12:50:47.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:50:47.549: INFO: namespace: e2e-tests-dns-nthh9, resource: bindings, ignored listing per whitelist
Dec 17 12:50:47.597: INFO: namespace e2e-tests-dns-nthh9 deletion completed in 8.273883476s

• [SLOW TEST:31.325 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:50:47.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
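The Given/When/Then steps above can be reproduced with two manifests: a bare pod carrying a `name` label, then a ReplicationController whose selector matches that label. Instead of creating a fresh replica, the controller adopts the orphan by setting itself as the pod's ownerReference. A hypothetical sketch under those assumptions (object names are illustrative, not the generated ones from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption          # label the controller's selector will match
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption-rc
spec:
  replicas: 1
  selector:
    name: pod-adoption          # matches the existing pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Once the controller exists, `kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'` would show the controller's name rather than a newly created replica.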
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:51:00.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-hpz7j" for this suite.
Dec 17 12:51:25.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:51:25.259: INFO: namespace: e2e-tests-replication-controller-hpz7j, resource: bindings, ignored listing per whitelist
Dec 17 12:51:25.314: INFO: namespace e2e-tests-replication-controller-hpz7j deletion completed in 24.347182065s

• [SLOW TEST:37.716 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:51:25.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:51:25.567: INFO: Creating deployment "test-recreate-deployment"
Dec 17 12:51:25.603: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 17 12:51:25.622: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 17 12:51:27.904: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 17 12:51:28.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:51:30.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:51:32.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:51:34.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712183885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 12:51:36.334: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 17 12:51:36.364: INFO: Updating deployment test-recreate-deployment
Dec 17 12:51:36.364: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
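The repeated "MinimumReplicasUnavailable" waits above are a consequence of the `Recreate` strategy this test exercises: unlike the default `RollingUpdate`, the controller scales the old ReplicaSet to zero before starting the new one, so old and new pods never overlap. A minimal sketch of the relevant spec, reconstructed from the deployment dump in this log (treat it as illustrative rather than the test's exact manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate              # no rollingUpdate block: old pods are deleted first
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

With `Recreate`, every rollout incurs downtime between the old ReplicaSet reaching zero and the new pods becoming ready, which is exactly the window the test watches for.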
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 17 12:51:37.067: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-kk7wm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kk7wm/deployments/test-recreate-deployment,UID:ef6f92f9-20cb-11ea-a994-fa163e34d433,ResourceVersion:15126283,Generation:2,CreationTimestamp:2019-12-17 12:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-17 12:51:36 +0000 UTC 2019-12-17 12:51:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-17 12:51:36 +0000 UTC 2019-12-17 12:51:25 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 17 12:51:37.086: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-kk7wm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kk7wm/replicasets/test-recreate-deployment-589c4bfd,UID:f60dd4bd-20cb-11ea-a994-fa163e34d433,ResourceVersion:15126282,Generation:1,CreationTimestamp:2019-12-17 12:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ef6f92f9-20cb-11ea-a994-fa163e34d433 0xc001c5b04f 0xc001c5b060}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 12:51:37.086: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 17 12:51:37.086: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-kk7wm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kk7wm/replicasets/test-recreate-deployment-5bf7f65dc,UID:ef7c159a-20cb-11ea-a994-fa163e34d433,ResourceVersion:15126270,Generation:2,CreationTimestamp:2019-12-17 12:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ef6f92f9-20cb-11ea-a994-fa163e34d433 0xc001c5b150 0xc001c5b151}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 12:51:37.094: INFO: Pod "test-recreate-deployment-589c4bfd-cmwkv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-cmwkv,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-kk7wm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kk7wm/pods/test-recreate-deployment-589c4bfd-cmwkv,UID:f613c07a-20cb-11ea-a994-fa163e34d433,ResourceVersion:15126281,Generation:0,CreationTimestamp:2019-12-17 12:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd f60dd4bd-20cb-11ea-a994-fa163e34d433 0xc001fe3b6f 0xc001fe3b80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q2c7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2c7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q2c7s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fe3be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fe3c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:51:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:51:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:51:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:51:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 12:51:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:51:37.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kk7wm" for this suite.
Dec 17 12:51:45.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:51:45.647: INFO: namespace: e2e-tests-deployment-kk7wm, resource: bindings, ignored listing per whitelist
Dec 17 12:51:45.872: INFO: namespace e2e-tests-deployment-kk7wm deletion completed in 8.768949172s

• [SLOW TEST:20.557 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:51:45.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 17 12:51:47.774: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4q9gk,SelfLink:/api/v1/namespaces/e2e-tests-watch-4q9gk/configmaps/e2e-watch-test-resource-version,UID:fc91be92-20cb-11ea-a994-fa163e34d433,ResourceVersion:15126325,Generation:0,CreationTimestamp:2019-12-17 12:51:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 12:51:47.775: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4q9gk,SelfLink:/api/v1/namespaces/e2e-tests-watch-4q9gk/configmaps/e2e-watch-test-resource-version,UID:fc91be92-20cb-11ea-a994-fa163e34d433,ResourceVersion:15126326,Generation:0,CreationTimestamp:2019-12-17 12:51:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:51:47.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4q9gk" for this suite.
Dec 17 12:51:53.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:51:54.010: INFO: namespace: e2e-tests-watch-4q9gk, resource: bindings, ignored listing per whitelist
Dec 17 12:51:54.131: INFO: namespace e2e-tests-watch-4q9gk deletion completed in 6.344334508s

• [SLOW TEST:8.259 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:51:54.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0092c7d1-20cc-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:51:54.371: INFO: Waiting up to 5m0s for pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-dh99h" to be "success or failure"
Dec 17 12:51:54.380: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.459445ms
Dec 17 12:51:56.399: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028536784s
Dec 17 12:51:58.415: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044480513s
Dec 17 12:52:02.750: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378648304s
Dec 17 12:52:04.769: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.39780403s
Dec 17 12:52:06.810: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.43946931s
Dec 17 12:52:08.835: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.464057057s
STEP: Saw pod success
Dec 17 12:52:08.835: INFO: Pod "pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:52:08.840: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 17 12:52:08.947: INFO: Waiting for pod pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:52:08.959: INFO: Pod pod-secrets-0093e8b9-20cc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:52:08.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dh99h" for this suite.
Dec 17 12:52:17.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:52:17.243: INFO: namespace: e2e-tests-secrets-dh99h, resource: bindings, ignored listing per whitelist
Dec 17 12:52:17.328: INFO: namespace e2e-tests-secrets-dh99h deletion completed in 8.359440704s

• [SLOW TEST:23.195 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:52:17.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 12:52:17.587: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-jpsvc" to be "success or failure"
Dec 17 12:52:17.604: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.682423ms
Dec 17 12:52:19.751: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163017895s
Dec 17 12:52:21.761: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173076307s
Dec 17 12:52:23.781: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193235975s
Dec 17 12:52:25.837: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249047718s
Dec 17 12:52:27.869: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.281340833s
Dec 17 12:52:29.893: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.305379817s
STEP: Saw pod success
Dec 17 12:52:29.893: INFO: Pod "downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:52:29.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 12:52:31.043: INFO: Waiting for pod downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:52:31.060: INFO: Pod downwardapi-volume-0e5fff01-20cc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:52:31.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jpsvc" for this suite.
Dec 17 12:52:37.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:52:37.221: INFO: namespace: e2e-tests-projected-jpsvc, resource: bindings, ignored listing per whitelist
Dec 17 12:52:37.330: INFO: namespace e2e-tests-projected-jpsvc deletion completed in 6.26361213s

• [SLOW TEST:20.002 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:52:37.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-bthv
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 12:52:37.656: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bthv" in namespace "e2e-tests-subpath-g5sng" to be "success or failure"
Dec 17 12:52:37.686: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 29.988966ms
Dec 17 12:52:39.717: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06046202s
Dec 17 12:52:41.746: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08961819s
Dec 17 12:52:43.902: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245588724s
Dec 17 12:52:46.503: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.846316502s
Dec 17 12:52:48.557: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900452561s
Dec 17 12:52:51.077: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.420632457s
Dec 17 12:52:53.128: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.471898178s
Dec 17 12:52:55.150: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 17.493670436s
Dec 17 12:52:57.167: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 19.510130015s
Dec 17 12:52:59.188: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 21.531160446s
Dec 17 12:53:01.210: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 23.553567482s
Dec 17 12:53:03.238: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 25.58133065s
Dec 17 12:53:05.272: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 27.615340999s
Dec 17 12:53:07.280: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 29.624049452s
Dec 17 12:53:09.297: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 31.640411279s
Dec 17 12:53:11.316: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Running", Reason="", readiness=false. Elapsed: 33.659953685s
Dec 17 12:53:13.329: INFO: Pod "pod-subpath-test-configmap-bthv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.672609405s
STEP: Saw pod success
Dec 17 12:53:13.329: INFO: Pod "pod-subpath-test-configmap-bthv" satisfied condition "success or failure"
Dec 17 12:53:13.337: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-bthv container test-container-subpath-configmap-bthv: 
STEP: delete the pod
Dec 17 12:53:13.572: INFO: Waiting for pod pod-subpath-test-configmap-bthv to disappear
Dec 17 12:53:13.634: INFO: Pod pod-subpath-test-configmap-bthv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bthv
Dec 17 12:53:13.634: INFO: Deleting pod "pod-subpath-test-configmap-bthv" in namespace "e2e-tests-subpath-g5sng"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:53:14.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-g5sng" for this suite.
Dec 17 12:53:22.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:53:22.984: INFO: namespace: e2e-tests-subpath-g5sng, resource: bindings, ignored listing per whitelist
Dec 17 12:53:23.159: INFO: namespace e2e-tests-subpath-g5sng deletion completed in 8.633696745s

• [SLOW TEST:45.828 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:53:23.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-35aa8d94-20cc-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 12:53:23.483: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-xmxld" to be "success or failure"
Dec 17 12:53:23.497: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.64338ms
Dec 17 12:53:25.530: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047178888s
Dec 17 12:53:27.554: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070867868s
Dec 17 12:53:30.212: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.728917159s
Dec 17 12:53:32.226: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7427918s
Dec 17 12:53:34.289: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.805968477s
Dec 17 12:53:36.296: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.813342375s
STEP: Saw pod success
Dec 17 12:53:36.296: INFO: Pod "pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:53:36.300: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 12:53:37.479: INFO: Waiting for pod pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:53:38.405: INFO: Pod pod-projected-configmaps-35b42031-20cc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:53:38.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xmxld" for this suite.
Dec 17 12:53:46.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:53:46.689: INFO: namespace: e2e-tests-projected-xmxld, resource: bindings, ignored listing per whitelist
Dec 17 12:53:46.770: INFO: namespace e2e-tests-projected-xmxld deletion completed in 8.187785868s

• [SLOW TEST:23.610 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:53:46.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:53:47.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v9vdf" for this suite.
Dec 17 12:54:11.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:54:11.582: INFO: namespace: e2e-tests-pods-v9vdf, resource: bindings, ignored listing per whitelist
Dec 17 12:54:11.615: INFO: namespace e2e-tests-pods-v9vdf deletion completed in 24.492013104s

• [SLOW TEST:24.845 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:54:11.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 17 12:54:26.128: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-52963afa-20cc-11ea-a5ef-0242ac110004,GenerateName:,Namespace:e2e-tests-events-zpb6d,SelfLink:/api/v1/namespaces/e2e-tests-events-zpb6d/pods/send-events-52963afa-20cc-11ea-a5ef-0242ac110004,UID:529ac348-20cc-11ea-a994-fa163e34d433,ResourceVersion:15126658,Generation:0,CreationTimestamp:2019-12-17 12:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 921603082,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz8x4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz8x4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zz8x4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e63920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e63940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:54:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:54:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:54:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 12:54:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-17 12:54:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-17 12:54:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://20da703ab43efe7d9bd264b90aa63beb9f750d2a39e1e7beb04af1402e4f33e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 17 12:54:28.150: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 17 12:54:30.173: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:54:30.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-zpb6d" for this suite.
Dec 17 12:55:14.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:55:14.374: INFO: namespace: e2e-tests-events-zpb6d, resource: bindings, ignored listing per whitelist
Dec 17 12:55:14.416: INFO: namespace e2e-tests-events-zpb6d deletion completed in 44.205693091s

• [SLOW TEST:62.801 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:55:14.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 17 12:55:29.313: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:55:56.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-48f85" for this suite.
Dec 17 12:56:02.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:03.138: INFO: namespace: e2e-tests-namespaces-48f85, resource: bindings, ignored listing per whitelist
Dec 17 12:56:03.156: INFO: namespace e2e-tests-namespaces-48f85 deletion completed in 6.371243956s
STEP: Destroying namespace "e2e-tests-nsdeletetest-6m794" for this suite.
Dec 17 12:56:03.162: INFO: Namespace e2e-tests-nsdeletetest-6m794 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-sdq9x" for this suite.
Dec 17 12:56:09.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:09.315: INFO: namespace: e2e-tests-nsdeletetest-sdq9x, resource: bindings, ignored listing per whitelist
Dec 17 12:56:09.391: INFO: namespace e2e-tests-nsdeletetest-sdq9x deletion completed in 6.229206488s

• [SLOW TEST:54.975 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:56:09.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-98c6bd25-20cc-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 12:56:09.746: INFO: Waiting up to 5m0s for pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-94vmr" to be "success or failure"
Dec 17 12:56:09.766: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.152267ms
Dec 17 12:56:12.341: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.594430401s
Dec 17 12:56:14.372: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625925727s
Dec 17 12:56:16.399: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.652937667s
Dec 17 12:56:18.433: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686374424s
Dec 17 12:56:20.505: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.758355456s
Dec 17 12:56:22.539: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.792459007s
Dec 17 12:56:24.740: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.993805517s
STEP: Saw pod success
Dec 17 12:56:24.740: INFO: Pod "pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:56:24.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 17 12:56:25.074: INFO: Waiting for pod pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:56:25.088: INFO: Pod pod-secrets-98c8caa9-20cc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:56:25.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-94vmr" for this suite.
Dec 17 12:56:33.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:33.328: INFO: namespace: e2e-tests-secrets-94vmr, resource: bindings, ignored listing per whitelist
Dec 17 12:56:33.400: INFO: namespace e2e-tests-secrets-94vmr deletion completed in 8.280709549s

• [SLOW TEST:24.008 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
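
Note on the defaultMode test above: it creates a Secret, mounts it into a pod with a non-default file mode, and has the container list and read the mounted file so the suite can verify the permissions from the pod logs. A minimal sketch of that kind of manifest follows; the object names, image, and command are illustrative (the run above uses generated UID-suffixed names), only the `defaultMode` field is the behavior under test:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # hypothetical name; the log uses a generated suffix
data:
  data-1: dmFsdWUtMQ==         # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test   # container name the log fetches logs from
    image: busybox             # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 256         # decimal for octal 0400; the field under test
```
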
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:56:33.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 17 12:56:33.964: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:56:34.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dlwrq" for this suite.
Dec 17 12:56:40.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:40.764: INFO: namespace: e2e-tests-kubectl-dlwrq, resource: bindings, ignored listing per whitelist
Dec 17 12:56:40.785: INFO: namespace e2e-tests-kubectl-dlwrq deletion completed in 6.55482496s

• [SLOW TEST:7.384 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:56:40.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 12:56:41.112: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.563677ms)
Dec 17 12:56:41.118: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.997988ms)
Dec 17 12:56:41.124: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.497457ms)
Dec 17 12:56:41.132: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.720666ms)
Dec 17 12:56:41.138: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.491041ms)
Dec 17 12:56:41.144: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.194061ms)
Dec 17 12:56:41.151: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.797284ms)
Dec 17 12:56:41.158: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.931806ms)
Dec 17 12:56:41.164: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.595282ms)
Dec 17 12:56:41.169: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.817207ms)
Dec 17 12:56:41.175: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.567392ms)
Dec 17 12:56:41.236: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 61.221292ms)
Dec 17 12:56:41.259: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 22.596213ms)
Dec 17 12:56:41.283: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 23.884675ms)
Dec 17 12:56:41.295: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.530969ms)
Dec 17 12:56:41.303: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.757347ms)
Dec 17 12:56:41.318: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.629834ms)
Dec 17 12:56:41.330: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.067094ms)
Dec 17 12:56:41.384: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 54.02456ms)
Dec 17 12:56:41.400: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.71056ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:56:41.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kxmsc" for this suite.
Dec 17 12:56:47.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:47.788: INFO: namespace: e2e-tests-proxy-kxmsc, resource: bindings, ignored listing per whitelist
Dec 17 12:56:48.066: INFO: namespace e2e-tests-proxy-kxmsc deletion completed in 6.64006405s

• [SLOW TEST:7.281 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:56:48.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 12:56:48.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-nbcsw" to be "success or failure"
Dec 17 12:56:48.696: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.536854ms
Dec 17 12:56:51.544: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.854593079s
Dec 17 12:56:53.647: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.957666061s
Dec 17 12:56:55.666: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.976436998s
Dec 17 12:56:58.646: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.956930056s
Dec 17 12:57:00.701: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.011794097s
Dec 17 12:57:02.715: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026296969s
Dec 17 12:57:04.726: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.037033836s
Dec 17 12:57:07.200: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.51086358s
STEP: Saw pod success
Dec 17 12:57:07.200: INFO: Pod "downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 12:57:07.237: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 12:57:07.840: INFO: Waiting for pod downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004 to disappear
Dec 17 12:57:07.862: INFO: Pod downwardapi-volume-b0030a7c-20cc-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:57:07.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nbcsw" for this suite.
Dec 17 12:57:14.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:57:14.181: INFO: namespace: e2e-tests-downward-api-nbcsw, resource: bindings, ignored listing per whitelist
Dec 17 12:57:14.355: INFO: namespace e2e-tests-downward-api-nbcsw deletion completed in 6.484125836s

• [SLOW TEST:26.288 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
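
The downward API test above mounts pod metadata as a file and sets a per-item file mode, then checks the mode from inside the container. A minimal sketch of the kind of manifest involved (names, image, command, and the chosen field are illustrative; the per-item `mode` is what the test verifies):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume     # hypothetical; the log uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container     # container name the log fetches logs from
    image: busybox             # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 256              # decimal for octal 0400; the per-item mode under test
```
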
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:57:14.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 17 12:57:14.951: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 12:57:14.977: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 12:57:14.981: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 17 12:57:15.225: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 17 12:57:15.225: INFO: 	Container coredns ready: true, restart count 0
Dec 17 12:57:15.225: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:57:15.225: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:57:15.225: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:57:15.225: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 17 12:57:15.225: INFO: 	Container coredns ready: true, restart count 0
Dec 17 12:57:15.225: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 17 12:57:15.225: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 12:57:15.225: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 17 12:57:15.225: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 17 12:57:15.225: INFO: 	Container weave ready: true, restart count 0
Dec 17 12:57:15.225: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.549: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.549: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.549: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.549: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.550: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.550: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.550: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 17 12:57:15.550: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-c009bc0e-20cc-11ea-a5ef-0242ac110004.15e129eb6ee4b8c4], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-tp8h6/filler-pod-c009bc0e-20cc-11ea-a5ef-0242ac110004 to hunter-server-hu5at5svl7ps]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c009bc0e-20cc-11ea-a5ef-0242ac110004.15e129ecd600f020], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c009bc0e-20cc-11ea-a5ef-0242ac110004.15e129ed9c963688], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c009bc0e-20cc-11ea-a5ef-0242ac110004.15e129edbfa2b8d6], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e129ee3e17636a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:57:28.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tp8h6" for this suite.
Dec 17 12:57:37.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:57:37.745: INFO: namespace: e2e-tests-sched-pred-tp8h6, resource: bindings, ignored listing per whitelist
Dec 17 12:57:38.299: INFO: namespace e2e-tests-sched-pred-tp8h6 deletion completed in 9.355205543s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:23.943 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
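
The SchedulerPredicates test above fills most of the node's allocatable CPU with filler pods, then creates one more pod whose request cannot fit, expecting the FailedScheduling event recorded in the log. A sketch of the final, unschedulable pod; the pod name and pause image come from the events above, while the CPU amount is illustrative (any request above the node's remaining capacity triggers "Insufficient cpu"):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod         # name taken from the FailedScheduling event
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # image the filler pods use, per the Pulled event
    resources:
      requests:
        cpu: "600m"            # illustrative; must exceed the node's free CPU to
                               # produce "0/1 nodes are available: 1 Insufficient cpu."
```
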
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:57:38.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Dec 17 12:57:55.917: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 12:57:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-lc4s5" for this suite.
Dec 17 12:58:33.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:58:34.234: INFO: namespace: e2e-tests-replicaset-lc4s5, resource: bindings, ignored listing per whitelist
Dec 17 12:58:34.248: INFO: namespace e2e-tests-replicaset-lc4s5 deletion completed in 37.124214244s

• [SLOW TEST:55.949 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
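
The ReplicaSet adoption test above relies on label-selector ownership: a bare pod whose labels match the selector is adopted by the controller, and changing that label releases it again. A sketch of the kind of ReplicaSet involved; the `name=pod-adoption-release` label comes from the log, the image is an assumption:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # a pre-existing pod with this label is adopted;
                                   # relabeling the pod releases it from the ReplicaSet
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # illustrative image
```
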
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 12:58:34.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fjg88
Dec 17 12:58:46.923: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fjg88
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 12:58:46.929: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:02:47.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fjg88" for this suite.
Dec 17 13:02:54.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:02:54.699: INFO: namespace: e2e-tests-container-probe-fjg88, resource: bindings, ignored listing per whitelist
Dec 17 13:02:54.723: INFO: namespace e2e-tests-container-probe-fjg88 deletion completed in 6.752749507s

• [SLOW TEST:260.474 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
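
The probe test above runs a pod with an HTTP liveness probe against /healthz for roughly four minutes and asserts the restart count stays at 0, i.e. the kubelet never kills a container whose probe keeps succeeding. A sketch of the probe configuration; the image, port, and timing values are assumptions, only the pod name and probe path come from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http            # pod name from the log
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/test-webserver   # illustrative: any server that keeps answering
    livenessProbe:
      httpGet:
        path: /healthz           # probe path named in the test title
        port: 8080               # illustrative port
      initialDelaySeconds: 15
      periodSeconds: 3           # probe repeatedly; restartCount must remain 0
```
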
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:02:54.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 13:02:54.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mtqb4'
Dec 17 13:02:57.151: INFO: stderr: ""
Dec 17 13:02:57.151: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 17 13:03:07.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mtqb4 -o json'
Dec 17 13:03:07.442: INFO: stderr: ""
Dec 17 13:03:07.442: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-17T13:02:57Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-mtqb4\",\n        \"resourceVersion\": \"15127499\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-mtqb4/pods/e2e-test-nginx-pod\",\n        \"uid\": \"8b9c5e33-20cd-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-gxqpn\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-gxqpn\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-gxqpn\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T13:02:57Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T13:03:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T13:03:05Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T13:02:57Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://5db8c3e539636ac2b520d17b0c5315610c3aeddcbaa16e26f3e099de62f32c68\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-17T13:03:05Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-17T13:02:57Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 17 13:03:07.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-mtqb4'
Dec 17 13:03:08.102: INFO: stderr: ""
Dec 17 13:03:08.102: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 17 13:03:08.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mtqb4'
Dec 17 13:03:19.678: INFO: stderr: ""
Dec 17 13:03:19.678: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:03:19.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mtqb4" for this suite.
Dec 17 13:03:25.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:03:25.880: INFO: namespace: e2e-tests-kubectl-mtqb4, resource: bindings, ignored listing per whitelist
Dec 17 13:03:25.947: INFO: namespace e2e-tests-kubectl-mtqb4 deletion completed in 6.23360413s

• [SLOW TEST:31.224 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:03:25.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 17 13:03:26.685: INFO: Waiting up to 5m0s for pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r" in namespace "e2e-tests-svcaccounts-xgslc" to be "success or failure"
Dec 17 13:03:26.716: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 31.003191ms
Dec 17 13:03:28.729: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044446565s
Dec 17 13:03:30.765: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079923599s
Dec 17 13:03:33.328: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.643188336s
Dec 17 13:03:35.358: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.673313004s
Dec 17 13:03:37.375: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.690392884s
Dec 17 13:03:39.392: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.706894386s
Dec 17 13:03:41.842: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 15.157272015s
Dec 17 13:03:44.236: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 17.551221974s
Dec 17 13:03:46.411: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Pending", Reason="", readiness=false. Elapsed: 19.72612677s
Dec 17 13:03:48.437: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.75178936s
STEP: Saw pod success
Dec 17 13:03:48.437: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r" satisfied condition "success or failure"
Dec 17 13:03:48.442: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r container token-test: 
STEP: delete the pod
Dec 17 13:03:48.583: INFO: Waiting for pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r to disappear
Dec 17 13:03:48.725: INFO: Pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-kdd9r no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 17 13:03:48.762: INFO: Waiting up to 5m0s for pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs" in namespace "e2e-tests-svcaccounts-xgslc" to be "success or failure"
Dec 17 13:03:48.836: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 74.450384ms
Dec 17 13:03:51.504: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.742114905s
Dec 17 13:03:53.516: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.754475336s
Dec 17 13:03:55.542: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780238852s
Dec 17 13:03:57.561: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.799366568s
Dec 17 13:03:59.582: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.820625976s
Dec 17 13:04:01.797: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 13.035455457s
Dec 17 13:04:03.830: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 15.068553562s
Dec 17 13:04:05.935: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 17.173731539s
Dec 17 13:04:07.953: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 19.191346711s
Dec 17 13:04:09.981: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Pending", Reason="", readiness=false. Elapsed: 21.219009655s
Dec 17 13:04:12.837: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.075476643s
STEP: Saw pod success
Dec 17 13:04:12.837: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs" satisfied condition "success or failure"
Dec 17 13:04:12.858: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs container root-ca-test: 
STEP: delete the pod
Dec 17 13:04:14.070: INFO: Waiting for pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs to disappear
Dec 17 13:04:14.088: INFO: Pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nm7rs no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 17 13:04:14.128: INFO: Waiting up to 5m0s for pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf" in namespace "e2e-tests-svcaccounts-xgslc" to be "success or failure"
Dec 17 13:04:14.364: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 235.598212ms
Dec 17 13:04:16.722: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593698704s
Dec 17 13:04:18.734: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.605619086s
Dec 17 13:04:21.006: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.87764028s
Dec 17 13:04:23.021: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.892717211s
Dec 17 13:04:25.035: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.90622051s
Dec 17 13:04:27.604: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.475039647s
Dec 17 13:04:30.414: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.285302625s
Dec 17 13:04:32.459: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.330843682s
Dec 17 13:04:34.745: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.616045982s
Dec 17 13:04:36.761: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.632913962s
STEP: Saw pod success
Dec 17 13:04:36.762: INFO: Pod "pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf" satisfied condition "success or failure"
Dec 17 13:04:36.771: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf container namespace-test: 
STEP: delete the pod
Dec 17 13:04:37.541: INFO: Waiting for pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf to disappear
Dec 17 13:04:37.552: INFO: Pod pod-service-account-9d3c8723-20cd-11ea-a5ef-0242ac110004-nhjtf no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:04:37.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-xgslc" for this suite.
Dec 17 13:04:45.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:04:45.852: INFO: namespace: e2e-tests-svcaccounts-xgslc, resource: bindings, ignored listing per whitelist
Dec 17 13:04:46.009: INFO: namespace e2e-tests-svcaccounts-xgslc deletion completed in 8.447620457s

• [SLOW TEST:80.062 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:04:46.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-c6x2
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 13:04:46.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-c6x2" in namespace "e2e-tests-subpath-pjnqn" to be "success or failure"
Dec 17 13:04:46.291: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.975567ms
Dec 17 13:04:48.475: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199115523s
Dec 17 13:04:50.505: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229138019s
Dec 17 13:04:52.805: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5287682s
Dec 17 13:04:54.819: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542712529s
Dec 17 13:04:56.880: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.604232871s
Dec 17 13:04:58.904: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.627416465s
Dec 17 13:05:00.924: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.648140289s
Dec 17 13:05:02.946: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.669973639s
Dec 17 13:05:04.962: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.685711919s
Dec 17 13:05:06.975: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 20.699169603s
Dec 17 13:05:08.992: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 22.715957679s
Dec 17 13:05:11.011: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 24.734622757s
Dec 17 13:05:13.019: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 26.743257154s
Dec 17 13:05:15.036: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 28.760260027s
Dec 17 13:05:17.062: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 30.785745354s
Dec 17 13:05:19.073: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Running", Reason="", readiness=false. Elapsed: 32.797110957s
Dec 17 13:05:21.090: INFO: Pod "pod-subpath-test-downwardapi-c6x2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.813513666s
STEP: Saw pod success
Dec 17 13:05:21.090: INFO: Pod "pod-subpath-test-downwardapi-c6x2" satisfied condition "success or failure"
Dec 17 13:05:21.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-c6x2 container test-container-subpath-downwardapi-c6x2: 
STEP: delete the pod
Dec 17 13:05:21.274: INFO: Waiting for pod pod-subpath-test-downwardapi-c6x2 to disappear
Dec 17 13:05:21.280: INFO: Pod pod-subpath-test-downwardapi-c6x2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-c6x2
Dec 17 13:05:21.281: INFO: Deleting pod "pod-subpath-test-downwardapi-c6x2" in namespace "e2e-tests-subpath-pjnqn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:05:21.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-pjnqn" for this suite.
Dec 17 13:05:27.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:05:27.663: INFO: namespace: e2e-tests-subpath-pjnqn, resource: bindings, ignored listing per whitelist
Dec 17 13:05:27.668: INFO: namespace e2e-tests-subpath-pjnqn deletion completed in 6.362797652s

• [SLOW TEST:41.659 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:05:27.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 17 13:05:28.067: INFO: Waiting up to 5m0s for pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-wkqkx" to be "success or failure"
Dec 17 13:05:28.079: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.846251ms
Dec 17 13:05:30.456: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389093898s
Dec 17 13:05:32.479: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411668167s
Dec 17 13:05:35.482: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.414876177s
Dec 17 13:05:37.516: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.448877715s
Dec 17 13:05:39.786: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.719300195s
Dec 17 13:05:41.804: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.736993501s
STEP: Saw pod success
Dec 17 13:05:41.804: INFO: Pod "pod-e596c5fa-20cd-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:05:41.807: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e596c5fa-20cd-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 13:05:43.810: INFO: Waiting for pod pod-e596c5fa-20cd-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:05:43.841: INFO: Pod pod-e596c5fa-20cd-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:05:43.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wkqkx" for this suite.
Dec 17 13:05:49.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:05:50.183: INFO: namespace: e2e-tests-emptydir-wkqkx, resource: bindings, ignored listing per whitelist
Dec 17 13:05:50.225: INFO: namespace e2e-tests-emptydir-wkqkx deletion completed in 6.366184893s

• [SLOW TEST:22.556 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:05:50.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 17 13:05:51.294: INFO: created pod pod-service-account-defaultsa
Dec 17 13:05:51.294: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 17 13:05:51.353: INFO: created pod pod-service-account-mountsa
Dec 17 13:05:51.353: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 17 13:05:51.521: INFO: created pod pod-service-account-nomountsa
Dec 17 13:05:51.521: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 17 13:05:51.543: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 17 13:05:51.544: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 17 13:05:51.577: INFO: created pod pod-service-account-mountsa-mountspec
Dec 17 13:05:51.577: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 17 13:05:51.727: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 17 13:05:51.727: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 17 13:05:51.740: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 17 13:05:51.740: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 17 13:05:51.764: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 17 13:05:51.764: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 17 13:05:52.059: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 17 13:05:52.059: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:05:52.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-7stvp" for this suite.
Dec 17 13:06:24.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:06:24.852: INFO: namespace: e2e-tests-svcaccounts-7stvp, resource: bindings, ignored listing per whitelist
Dec 17 13:06:25.060: INFO: namespace e2e-tests-svcaccounts-7stvp deletion completed in 32.977416833s

• [SLOW TEST:34.835 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:06:25.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 13:06:25.508: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 17 13:06:25.640: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hzm4n/daemonsets","resourceVersion":"15127993"},"items":null}

Dec 17 13:06:25.649: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hzm4n/pods","resourceVersion":"15127993"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:06:25.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hzm4n" for this suite.
Dec 17 13:06:31.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:06:31.918: INFO: namespace: e2e-tests-daemonsets-hzm4n, resource: bindings, ignored listing per whitelist
Dec 17 13:06:31.928: INFO: namespace e2e-tests-daemonsets-hzm4n deletion completed in 6.247002262s

S [SKIPPING] [6.867 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 17 13:06:25.508: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:06:31.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 13:06:32.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-rvlw2" to be "success or failure"
Dec 17 13:06:32.177: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.769283ms
Dec 17 13:06:34.581: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423040078s
Dec 17 13:06:36.599: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440731919s
Dec 17 13:06:39.209: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.051044876s
Dec 17 13:06:41.224: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.065594632s
Dec 17 13:06:43.239: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.080560928s
STEP: Saw pod success
Dec 17 13:06:43.239: INFO: Pod "downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:06:43.249: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 13:06:43.447: INFO: Waiting for pod downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:06:43.462: INFO: Pod downwardapi-volume-0bca8b11-20ce-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:06:43.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rvlw2" for this suite.
Dec 17 13:06:49.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:06:49.691: INFO: namespace: e2e-tests-projected-rvlw2, resource: bindings, ignored listing per whitelist
Dec 17 13:06:49.791: INFO: namespace e2e-tests-projected-rvlw2 deletion completed in 6.320520914s

• [SLOW TEST:17.862 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:06:49.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1674a398-20ce-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 13:06:50.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-2qq48" to be "success or failure"
Dec 17 13:06:50.073: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.908564ms
Dec 17 13:06:52.248: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184889618s
Dec 17 13:06:54.279: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215772479s
Dec 17 13:06:56.690: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626906033s
Dec 17 13:06:58.714: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.650950572s
Dec 17 13:07:00.731: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668474615s
Dec 17 13:07:02.758: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.695254737s
STEP: Saw pod success
Dec 17 13:07:02.758: INFO: Pod "pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:07:02.803: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 17 13:07:03.080: INFO: Waiting for pod pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:07:03.917: INFO: Pod pod-configmaps-1676e2b8-20ce-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:07:03.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2qq48" for this suite.
Dec 17 13:07:10.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:07:10.884: INFO: namespace: e2e-tests-configmap-2qq48, resource: bindings, ignored listing per whitelist
Dec 17 13:07:10.996: INFO: namespace e2e-tests-configmap-2qq48 deletion completed in 7.070817811s

• [SLOW TEST:21.205 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
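The ConfigMap test above mounts a ConfigMap as a volume and remaps a key to a nested file path via `items`. A minimal sketch of the kind of objects it creates (names, image, and paths here are illustrative, not the generated ones from the log):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                  # assumption; the suite uses its own test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1                 # the "mapping": key data-1 appears as path/to/data-2
        path: path/to/data-2
```

The pod runs to completion ("success or failure" above means phase Succeeded or Failed), which is why the log polls through Pending until Succeeded.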
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:07:10.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-85ncv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 13:07:11.173: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 13:07:47.628: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-85ncv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 13:07:47.628: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 13:07:48.051: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:07:48.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-85ncv" for this suite.
Dec 17 13:08:12.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:08:12.317: INFO: namespace: e2e-tests-pod-network-test-85ncv, resource: bindings, ignored listing per whitelist
Dec 17 13:08:12.386: INFO: namespace e2e-tests-pod-network-test-85ncv deletion completed in 24.299885807s

• [SLOW TEST:61.389 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
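The intra-pod UDP check works by running a network-echo server pod on each node and a host-network "exec" pod, then curling the echo server's `/dial` endpoint (visible verbatim in the `ExecWithOptions` line above) to relay a UDP probe. A hedged sketch of one such server pod, with image and label names assumed rather than taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                 # hypothetical name
  labels:
    selector-key: net-test          # hypothetical selector label
spec:
  containers:
  - name: webserver
    # assumption: an agnhost/netexec-style image that serves HTTP on 8080
    # and echoes its hostname over UDP on 8081, matching the ports in the log
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080
    - containerPort: 8081
      protocol: UDP
```

`Waiting for endpoints: map[]` indicates the dial returned every expected hostname, so no endpoints remain outstanding.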
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:08:12.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-hfv87/secret-test-47be0d01-20ce-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 13:08:12.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004" in namespace "e2e-tests-secrets-hfv87" to be "success or failure"
Dec 17 13:08:12.879: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 112.184293ms
Dec 17 13:08:14.894: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127045544s
Dec 17 13:08:16.971: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20466024s
Dec 17 13:08:19.065: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.297927398s
Dec 17 13:08:22.017: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.250457915s
Dec 17 13:08:24.056: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.289298335s
Dec 17 13:08:26.072: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.305375706s
Dec 17 13:08:28.100: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.332764987s
Dec 17 13:08:30.121: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.354347312s
STEP: Saw pod success
Dec 17 13:08:30.121: INFO: Pod "pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:08:30.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004 container env-test: 
STEP: delete the pod
Dec 17 13:08:31.078: INFO: Waiting for pod pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:08:31.252: INFO: Pod pod-configmaps-47bf69d4-20ce-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:08:31.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hfv87" for this suite.
Dec 17 13:08:39.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:08:39.519: INFO: namespace: e2e-tests-secrets-hfv87, resource: bindings, ignored listing per whitelist
Dec 17 13:08:39.602: INFO: namespace e2e-tests-secrets-hfv87 deletion completed in 8.308988691s

• [SLOW TEST:27.216 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
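The Secrets test injects a secret key into a container's environment with `valueFrom.secretKeyRef`. A minimal sketch under assumed names and values (the log only shows generated identifiers):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                 # hypothetical name
data:
  data-1: dmFsdWUtMQ==              # base64 of "value-1" (illustrative)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                  # assumption
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```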
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:08:39.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-djsxq
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-djsxq
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-djsxq
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-djsxq
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-djsxq
Dec 17 13:08:58.233: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-djsxq, name: ss-0, uid: 628ef17b-20ce-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 17 13:09:02.473: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-djsxq, name: ss-0, uid: 628ef17b-20ce-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 17 13:09:02.613: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-djsxq, name: ss-0, uid: 628ef17b-20ce-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 17 13:09:02.630: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-djsxq
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-djsxq
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-djsxq and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 17 13:09:22.124: INFO: Deleting all statefulset in ns e2e-tests-statefulset-djsxq
Dec 17 13:09:22.132: INFO: Scaling statefulset ss to 0
Dec 17 13:09:42.239: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 13:09:42.245: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:09:42.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-djsxq" for this suite.
Dec 17 13:09:50.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:09:50.656: INFO: namespace: e2e-tests-statefulset-djsxq, resource: bindings, ignored listing per whitelist
Dec 17 13:09:50.738: INFO: namespace e2e-tests-statefulset-djsxq deletion completed in 8.406130945s

• [SLOW TEST:71.135 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
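The StatefulSet eviction test relies on a hostPort conflict: a plain pod is pinned to a node holding a hostPort, then a StatefulSet requesting the same hostPort is created, so `ss-0` fails admission (phase `Failed` in the log) and the controller deletes and recreates it; once the blocking pod is removed, `ss-0` runs. A sketch of the blocking pod, with node name and port as stated assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: some-node               # hypothetical; the test pins both to one node
  containers:
  - name: nginx
    image: nginx                    # assumption
    ports:
    - containerPort: 80
      hostPort: 21017               # illustrative conflicting hostPort
```

The StatefulSet's pod template declares the identical `hostPort`, which is what forces the repeated delete/recreate cycle observed above.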
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:09:50.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004
Dec 17 13:09:51.108: INFO: Pod name my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004: Found 0 pods out of 1
Dec 17 13:09:56.133: INFO: Pod name my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004: Found 1 pods out of 1
Dec 17 13:09:56.133: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004" are running
Dec 17 13:10:02.162: INFO: Pod "my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004-gcx99" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 13:09:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 13:09:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 13:09:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 13:09:51 +0000 UTC Reason: Message:}])
Dec 17 13:10:02.162: INFO: Trying to dial the pod
Dec 17 13:10:07.219: INFO: Controller my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004: Got expected result from replica 1 [my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004-gcx99]: "my-hostname-basic-825e7bd5-20ce-11ea-a5ef-0242ac110004-gcx99", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:10:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-t442k" for this suite.
Dec 17 13:10:13.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:10:13.330: INFO: namespace: e2e-tests-replication-controller-t442k, resource: bindings, ignored listing per whitelist
Dec 17 13:10:13.426: INFO: namespace e2e-tests-replication-controller-t442k deletion completed in 6.193786112s

• [SLOW TEST:22.687 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
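The ReplicationController test serves each replica's hostname from a public image and dials every pod until all replicas answer with their own pod name. A hedged sketch (generated suffixes dropped; image tag assumed):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic           # the log appends a generated UID suffix
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # assumption: the public serve-hostname image, which answers HTTP
        # requests with the pod's hostname on port 9376
        image: k8s.gcr.io/serve-hostname:1.1
        ports:
        - containerPort: 9376
```

"Got expected result from replica 1" above means the dialed pod returned its own name, confirming the replica is serving.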
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:10:13.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 17 13:10:39.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:39.993: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:41.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:42.013: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:43.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:44.020: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:45.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:46.020: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:47.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:48.008: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:49.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:50.010: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:51.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:52.027: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:53.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:54.018: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:55.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:56.018: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:57.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:10:58.010: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:10:59.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:11:00.004: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:11:01.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:11:02.011: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 13:11:03.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 13:11:04.016: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:11:04.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4vnxf" for this suite.
Dec 17 13:11:28.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:11:28.230: INFO: namespace: e2e-tests-container-lifecycle-hook-4vnxf, resource: bindings, ignored listing per whitelist
Dec 17 13:11:28.533: INFO: namespace e2e-tests-container-lifecycle-hook-4vnxf deletion completed in 24.415249045s

• [SLOW TEST:75.107 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
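The preStop test first starts a handler pod (the "container to handle the HTTPGet hook request" step), then creates a pod whose preStop hook reports back to that handler; deleting the pod triggers the hook, and the long "still exists" polling above is the pod draining through its termination grace period. A sketch of the hooked pod, with the handler address and message as labeled assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                  # assumption
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # hypothetical handler endpoint; the real test targets the
          # handler pod's IP created in BeforeEach
          command: ["sh", "-c", "wget -q -O- http://HANDLER_POD_IP:8080/echo?msg=prestop"]
```

The final "check prestop hook" step verifies the handler received the message, proving the hook ran before the container was killed.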
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:11:28.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:11:41.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kln7c" for this suite.
Dec 17 13:11:49.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:11:49.276: INFO: namespace: e2e-tests-kubelet-test-kln7c, resource: bindings, ignored listing per whitelist
Dec 17 13:11:49.360: INFO: namespace e2e-tests-kubelet-test-kln7c deletion completed in 8.205426089s

• [SLOW TEST:20.826 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
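The Kubelet test schedules a command that always exits non-zero with `restartPolicy: Never`, then asserts the container status carries a terminated state with a reason. A minimal sketch (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox                  # assumption
    command: ["/bin/false"]         # always fails with exit code 1
```

The assertion then reads `status.containerStatuses[0].state.terminated` and checks that `reason` is populated (typically `Error` for a non-zero exit).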
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:11:49.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 17 13:11:49.665: INFO: Waiting up to 5m0s for pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004" in namespace "e2e-tests-containers-86k8t" to be "success or failure"
Dec 17 13:11:49.674: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.37814ms
Dec 17 13:11:51.953: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28835065s
Dec 17 13:11:53.998: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33284094s
Dec 17 13:11:56.602: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.93770077s
Dec 17 13:11:58.617: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.951882251s
Dec 17 13:12:00.677: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.012118915s
Dec 17 13:12:02.698: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.033576475s
STEP: Saw pod success
Dec 17 13:12:02.698: INFO: Pod "client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:12:02.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 13:12:02.991: INFO: Waiting for pod client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:12:03.002: INFO: Pod client-containers-c909d1a6-20ce-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:12:03.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-86k8t" for this suite.
Dec 17 13:12:09.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:12:09.214: INFO: namespace: e2e-tests-containers-86k8t, resource: bindings, ignored listing per whitelist
Dec 17 13:12:09.286: INFO: namespace e2e-tests-containers-86k8t deletion completed in 6.277255597s

• [SLOW TEST:19.926 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
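Overriding the image's default command maps onto the pod spec's `command` field, which replaces the Docker `ENTRYPOINT` (while `args` replaces `CMD`). A sketch of the tested pattern, with names and output assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers           # the log appends a generated UID suffix
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumption; any image with an ENTRYPOINT
    command: ["/bin/echo"]          # overrides the image ENTRYPOINT
    args: ["override", "arguments"] # overrides the image CMD
```

The test then reads the container log and checks it matches the overridden command's output rather than the image default.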
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:12:09.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 13:12:09.621: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 17 13:12:14.644: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 17 13:12:20.671: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 17 13:12:22.690: INFO: Creating deployment "test-rollover-deployment"
Dec 17 13:12:22.746: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 17 13:12:24.766: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 17 13:12:24.786: INFO: Ensure that both replica sets have 1 created replica
Dec 17 13:12:24.793: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 17 13:12:24.809: INFO: Updating deployment test-rollover-deployment
Dec 17 13:12:24.809: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 17 13:12:27.128: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 17 13:12:27.176: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 17 13:12:27.230: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:27.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:29.809: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:29.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:31.444: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:31.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:33.308: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:33.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:36.097: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:36.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:37.262: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:37.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:39.278: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:39.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185146, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:41.289: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:41.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185160, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:43.253: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:43.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185160, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:45.261: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:45.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185160, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:47.259: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:47.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185160, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:49.313: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 13:12:49.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185160, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185142, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 13:12:51.687: INFO: 
Dec 17 13:12:51.687: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 17 13:12:51.702: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2lv6q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2lv6q/deployments/test-rollover-deployment,UID:dcbd44c6-20ce-11ea-a994-fa163e34d433,ResourceVersion:15128906,Generation:2,CreationTimestamp:2019-12-17 13:12:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-17 13:12:22 +0000 UTC 2019-12-17 13:12:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-17 13:12:50 +0000 UTC 2019-12-17 13:12:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 17 13:12:51.707: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2lv6q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2lv6q/replicasets/test-rollover-deployment-5b8479fdb6,UID:de00199d-20ce-11ea-a994-fa163e34d433,ResourceVersion:15128896,Generation:2,CreationTimestamp:2019-12-17 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment dcbd44c6-20ce-11ea-a994-fa163e34d433 0xc0020117f7 0xc0020117f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 17 13:12:51.707: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 17 13:12:51.707: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2lv6q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2lv6q/replicasets/test-rollover-controller,UID:d4eda2f5-20ce-11ea-a994-fa163e34d433,ResourceVersion:15128905,Generation:2,CreationTimestamp:2019-12-17 13:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment dcbd44c6-20ce-11ea-a994-fa163e34d433 0xc00201149f 0xc002011510}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 13:12:51.707: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2lv6q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2lv6q/replicasets/test-rollover-deployment-58494b7559,UID:dcc852ba-20ce-11ea-a994-fa163e34d433,ResourceVersion:15128854,Generation:2,CreationTimestamp:2019-12-17 13:12:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment dcbd44c6-20ce-11ea-a994-fa163e34d433 0xc002011707 0xc002011708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 13:12:51.712: INFO: Pod "test-rollover-deployment-5b8479fdb6-txvc2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-txvc2,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2lv6q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2lv6q/pods/test-rollover-deployment-5b8479fdb6-txvc2,UID:de9df595-20ce-11ea-a994-fa163e34d433,ResourceVersion:15128881,Generation:0,CreationTimestamp:2019-12-17 13:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 de00199d-20ce-11ea-a994-fa163e34d433 0xc001827227 0xc001827228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rjdkw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rjdkw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rjdkw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001827290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018272b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:12:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:12:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:12:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:12:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-17 13:12:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-17 13:12:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://cafd92ab33a429f41dc9aa22014ab6a1f8454b35a6b70bb15f8d9702615989dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:12:51.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2lv6q" for this suite.
Dec 17 13:13:02.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:13:02.289: INFO: namespace: e2e-tests-deployment-2lv6q, resource: bindings, ignored listing per whitelist
Dec 17 13:13:02.453: INFO: namespace e2e-tests-deployment-2lv6q deletion completed in 10.733303219s

• [SLOW TEST:53.167 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
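The rollover test above polls the deployment status until the new ReplicaSet has fully replaced the old one. A minimal sketch of that convergence check, using status values taken from the log (the function and field selection are illustrative, not the e2e framework's actual helper):

```python
# Illustrative rollover-completion check: a rollout is done when the
# controller has observed the latest generation and every replica is
# updated, ready, and available, with none unavailable.
def rollover_complete(status, spec_replicas):
    return (
        status["observedGeneration"] >= status["generation"]
        and status["updatedReplicas"] == spec_replicas
        and status["readyReplicas"] == spec_replicas
        and status["availableReplicas"] == spec_replicas
        and status["unavailableReplicas"] == 0
    )

# Status as logged at 13:12:49 -- old pod still counted, rollout in progress:
in_progress = {"generation": 2, "observedGeneration": 2, "updatedReplicas": 1,
               "readyReplicas": 2, "availableReplicas": 1, "unavailableReplicas": 1}

# Status from the final Deployment dump -- rollout complete:
done = {"generation": 2, "observedGeneration": 2, "updatedReplicas": 1,
        "readyReplicas": 1, "availableReplicas": 1, "unavailableReplicas": 0}
```

With `spec_replicas=1` (the deployment's `Replicas:*1`), the in-progress status fails the check and the final status passes it, which is why the framework keeps logging "all replica sets need to contain the pod-template-hash label" until 13:12:51.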
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:13:02.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 17 13:13:02.741: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:13:22.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mnhrx" for this suite.
Dec 17 13:13:30.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:13:30.390: INFO: namespace: e2e-tests-init-container-mnhrx, resource: bindings, ignored listing per whitelist
Dec 17 13:13:30.760: INFO: namespace e2e-tests-init-container-mnhrx deletion completed in 8.450890243s

• [SLOW TEST:28.306 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
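The init-container test above relies on the ordering guarantee that init containers run one at a time, in spec order, each to successful completion before the next starts; with `restartPolicy: Never`, a failing init container fails the whole pod instead of being retried. A toy model of those semantics (not kubelet code):

```python
# Toy model of init-container semantics for a restartPolicy: Never pod:
# init containers run sequentially, each must exit 0 before the next
# starts, and any non-zero exit fails the pod with no restart.
def run_pod(init_exit_codes):
    for i, code in enumerate(init_exit_codes):
        if code != 0:
            return ("Failed", i)  # pod fails at init container i
    # all init containers succeeded; app containers may now start
    return ("Succeeded", len(init_exit_codes))
```

For example, `run_pod([0, 0])` yields `("Succeeded", 2)` while `run_pod([0, 1])` yields `("Failed", 1)`.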
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:13:30.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 13:13:31.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-k65jp" to be "success or failure"
Dec 17 13:13:31.199: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.952169ms
Dec 17 13:13:33.212: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034319393s
Dec 17 13:13:35.223: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045039533s
Dec 17 13:13:37.910: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73272926s
Dec 17 13:13:40.026: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848235781s
Dec 17 13:13:42.055: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.877640232s
Dec 17 13:13:44.072: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.894532294s
STEP: Saw pod success
Dec 17 13:13:44.072: INFO: Pod "downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:13:44.082: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 13:13:44.685: INFO: Waiting for pod downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:13:44.778: INFO: Pod downwardapi-volume-058b841e-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:13:44.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k65jp" for this suite.
Dec 17 13:13:50.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:13:51.032: INFO: namespace: e2e-tests-downward-api-k65jp, resource: bindings, ignored listing per whitelist
Dec 17 13:13:51.150: INFO: namespace e2e-tests-downward-api-k65jp deletion completed in 6.350096484s

• [SLOW TEST:20.389 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
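The Downward API volume test above mounts a file whose content is the container's memory request rendered in the divisor's units; with a divisor of 1, a request written with a binary suffix appears as a plain byte count. A hedged sketch of that conversion, handling only binary suffixes (a real quantity parser also accepts decimal suffixes such as `M` and `G`):

```python
# Convert a Kubernetes binary-suffix quantity (e.g. "64Mi") to the bytes
# value a downward API volume file reports when the divisor is 1.
_BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(q):
    for suffix, mult in _BINARY.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * mult
    return int(q)  # no suffix: the value is already in bytes
```

So a request of `64Mi` shows up in the mounted file as `67108864`.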
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:13:51.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 17 13:13:51.405: INFO: Creating deployment "nginx-deployment"
Dec 17 13:13:51.416: INFO: Waiting for observed generation 1
Dec 17 13:13:54.174: INFO: Waiting for all required pods to come up
Dec 17 13:13:54.555: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 17 13:14:40.628: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 17 13:14:40.689: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 17 13:14:40.713: INFO: Updating deployment nginx-deployment
Dec 17 13:14:40.713: INFO: Waiting for observed generation 2
Dec 17 13:14:42.739: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 17 13:14:43.862: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 17 13:14:43.909: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 17 13:14:45.605: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 17 13:14:45.605: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 17 13:14:46.384: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 17 13:14:46.741: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
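(Editor's note, not part of the log: the availability check above follows from the deployment's RollingUpdate strategy, visible in the object dump later in this log as `MaxUnavailable:2`. A minimal sketch of that arithmetic — the helper name `min_available` is ours, not from the test code:)

```python
def min_available(desired_replicas: int, max_unavailable: int) -> int:
    """During a RollingUpdate, a Deployment must keep at least
    desired - maxUnavailable replicas available."""
    return desired_replicas - max_unavailable

# 10 desired replicas with maxUnavailable=2 gives a floor of 8,
# which matches the first rollout's availableReplicas = 8 above.
print(min_available(10, 2))
```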
Dec 17 13:14:46.741: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 17 13:14:47.011: INFO: Updating deployment nginx-deployment
Dec 17 13:14:47.011: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 17 13:14:47.379: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 17 13:14:51.308: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
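(Editor's note, not part of the log: the 20/13 split above is the proportional-scaling behavior under test. Scaling from 10 to 30 with `MaxSurge:3` allows 30 + 3 = 33 total replicas; the 33 − (8 + 5) = 20 new replicas are distributed in proportion to each ReplicaSet's current size, remainders going to the largest fractions first. The sketch below is our simplified model of that distribution, not the actual deployment-controller code:)

```python
def proportional_scale(current: dict, desired: int, max_surge: int) -> dict:
    """Distribute (desired + maxSurge - total) new replicas across
    ReplicaSets in proportion to their current sizes."""
    total = sum(current.values())
    to_add = desired + max_surge - total
    raw = {rs: to_add * n / total for rs, n in current.items()}
    result = {rs: n + int(raw[rs]) for rs, n in current.items()}
    leftover = to_add - sum(int(v) for v in raw.values())
    # Hand out rounding leftovers to the largest fractional parts first.
    for rs in sorted(raw, key=lambda r: raw[r] - int(raw[r]), reverse=True):
        if leftover == 0:
            break
        result[rs] += 1
        leftover -= 1
    return result

# First rollout has 8 replicas, second has 5; scaling 10 -> 30 with
# maxSurge=3 yields .spec.replicas of 20 and 13, as the log verifies.
print(proportional_scale({"first": 8, "second": 5}, 30, 3))
```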
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 17 13:14:55.541: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gxcbc/deployments/nginx-deployment,UID:119da9ec-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129363,Generation:3,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-17 13:14:47 +0000 UTC 2019-12-17 13:14:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-17 13:14:52 +0000 UTC 2019-12-17 13:13:51 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 17 13:14:56.614: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gxcbc/replicasets/nginx-deployment-5c98f8fb5,UID:2f0249dc-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129357,Generation:3,CreationTimestamp:2019-12-17 13:14:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 119da9ec-20cf-11ea-a994-fa163e34d433 0xc00228aa97 0xc00228aa98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 13:14:56.615: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 17 13:14:56.615: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gxcbc/replicasets/nginx-deployment-85ddf47c5d,UID:11a6c5a8-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129358,Generation:3,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 119da9ec-20cf-11ea-a994-fa163e34d433 0xc00228ab57 0xc00228ab58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 17 13:14:57.958: INFO: Pod "nginx-deployment-5c98f8fb5-4rlbq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4rlbq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-4rlbq,UID:34f02bfd-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129352,Generation:0,CreationTimestamp:2019-12-17 13:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986477 0xc002986478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029864f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029865a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.958: INFO: Pod "nginx-deployment-5c98f8fb5-528nq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-528nq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-528nq,UID:3463ae09-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129341,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc0029866d7 0xc0029866d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029867d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029867f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-5b8qx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5b8qx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-5b8qx,UID:3463908a-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129343,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986867 0xc002986868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986940} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002986960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-5kcx4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5kcx4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-5kcx4,UID:2f08ae63-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129288,Generation:0,CreationTimestamp:2019-12-17 13:14:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc0029869d7 0xc0029869d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986a50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002986a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-7vcfw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7vcfw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-7vcfw,UID:2f1b6f4d-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129295,Generation:0,CreationTimestamp:2019-12-17 13:14:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986b37 0xc002986b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986ba0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002986bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-8jr8b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8jr8b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-8jr8b,UID:2fb189a4-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129302,Generation:0,CreationTimestamp:2019-12-17 13:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986c87 0xc002986c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986cf0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002986d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-bm89c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bm89c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-bm89c,UID:334e94ac-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129315,Generation:0,CreationTimestamp:2019-12-17 13:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986dd7 0xc002986dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986e40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002986e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:49 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-gdgrc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gdgrc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-gdgrc,UID:3463b538-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129342,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986ed7 0xc002986ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986f40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002986f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.959: INFO: Pod "nginx-deployment-5c98f8fb5-hh55b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hh55b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-hh55b,UID:3350003e-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129318,Generation:0,CreationTimestamp:2019-12-17 13:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002986fd7 0xc002986fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002987040} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002987060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:49 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-5c98f8fb5-mf7lh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mf7lh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-mf7lh,UID:2f1bf6a0-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129289,Generation:0,CreationTimestamp:2019-12-17 13:14:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc0029870d7 0xc0029870d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002987140} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002987160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-5c98f8fb5-mm4lw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mm4lw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-mm4lw,UID:33020e9a-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129356,Generation:0,CreationTimestamp:2019-12-17 13:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002987227 0xc002987228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002987290} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029872b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-5c98f8fb5-pdgrj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pdgrj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-pdgrj,UID:2f844f4d-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129293,Generation:0,CreationTimestamp:2019-12-17 13:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc002987377 0xc002987378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029873e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002987400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-5c98f8fb5-qggk5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qggk5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-5c98f8fb5-qggk5,UID:34639796-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129338,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2f0249dc-20cf-11ea-a994-fa163e34d433 0xc0029874c7 0xc0029874c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002987530} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002987550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-85ddf47c5d-484vn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-484vn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-484vn,UID:11bd602c-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129217,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0029875c7 0xc0029875c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002987650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-17 13:13:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7f4e328a72312d92519a57e4b3b27edd911fd6b7990893c21b59fadf87516555}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-85ddf47c5d-4zbf9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4zbf9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-4zbf9,UID:346361ba-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129344,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987717 0xc002987718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029877a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.960: INFO: Pod "nginx-deployment-85ddf47c5d-652wg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-652wg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-652wg,UID:34ef70ac-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129349,Generation:0,CreationTimestamp:2019-12-17 13:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987817 0xc002987818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029878a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-6sr6z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6sr6z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-6sr6z,UID:11ec5730-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129228,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987917 0xc002987918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029879a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-17 13:13:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e12488b82469e7e1ba7407986fc7144ff252d342af866058fb68d97f6a70575}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-7gtlf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7gtlf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-7gtlf,UID:34effdef-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129354,Generation:0,CreationTimestamp:2019-12-17 13:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987a67 0xc002987a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002987af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-7qvdt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7qvdt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-7qvdt,UID:11bf2dc5-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129196,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987b67 0xc002987b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002987bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-17 13:13:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ee70982546e2313588bee1c67393b88c72cb21f16df2a18b2dd9b1bb34f5ef65}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-8vg7q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8vg7q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-8vg7q,UID:346359bc-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129340,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987cb7 0xc002987cb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987d20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002987d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-9h6lt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9h6lt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-9h6lt,UID:11c5fb4c-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129207,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987db7 0xc002987db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002987e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-17 13:13:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://66813c92e8d1111849659af4bf230efd83d38c4d5a69a69f13bf871f3d5c355a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-c7nz4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c7nz4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-c7nz4,UID:335028d0-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129317,Generation:0,CreationTimestamp:2019-12-17 13:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc002987f07 0xc002987f08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002987f70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002987f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:49 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.961: INFO: Pod "nginx-deployment-85ddf47c5d-cks4l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cks4l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-cks4l,UID:34efdafa-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129351,Generation:0,CreationTimestamp:2019-12-17 13:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0028dc517 0xc0028dc518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0028dc5c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028dc5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-f8c2x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f8c2x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-f8c2x,UID:11c52aa5-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129225,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0028dc6f7 0xc0028dc6f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0028dc9e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028dcdc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-17 13:13:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b8b7f057e89688cf33577fa1aab155a6f46e6de5ed19391bf136c4cdd9e69a66}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-fwtck" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fwtck,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-fwtck,UID:34f09963-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129355,Generation:0,CreationTimestamp:2019-12-17 13:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0028dd187 0xc0028dd188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0028dd5b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028dd5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-gndp6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gndp6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-gndp6,UID:32fda1e9-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129370,Generation:0,CreationTimestamp:2019-12-17 13:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0028dd8c7 0xc0028dd8c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0028dd930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028dd950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-17 13:14:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-hlnzq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hlnzq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-hlnzq,UID:334fc74b-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129316,Generation:0,CreationTimestamp:2019-12-17 13:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0028ddd07 0xc0028ddd08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0028dde30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028dde50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:49 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-jpcvt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jpcvt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-jpcvt,UID:34ef4aed-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129350,Generation:0,CreationTimestamp:2019-12-17 13:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0028ddfe7 0xc0028ddfe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0023a2050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023a2070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-s4ggs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s4ggs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-s4ggs,UID:34630aec-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129331,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0023a2107 0xc0023a2108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0023a21f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023a2210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.962: INFO: Pod "nginx-deployment-85ddf47c5d-sf4x2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sf4x2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-sf4x2,UID:11c3e90b-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129211,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0023a2297 0xc0023a2298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0023a2360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023a2380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-17 13:13:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://80e08e7dbcbad62bc9051a4f2dad1c2972f1ecdd847b31c606084a78761b21c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.963: INFO: Pod "nginx-deployment-85ddf47c5d-sqtmm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sqtmm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-sqtmm,UID:34638176-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129336,Generation:0,CreationTimestamp:2019-12-17 13:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0023a2487 0xc0023a2488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0023a24f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023a2510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.963: INFO: Pod "nginx-deployment-85ddf47c5d-t7krt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t7krt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-t7krt,UID:11c0a9f0-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129233,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0023a2587 0xc0023a2588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0023a2600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023a2620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-17 13:13:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://07d918532546510dad190f123e83584274691f78cf639c2beb69392ec2d21fe8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:14:57.963: INFO: Pod "nginx-deployment-85ddf47c5d-tglxj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tglxj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gxcbc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gxcbc/pods/nginx-deployment-85ddf47c5d-tglxj,UID:11c610b6-20cf-11ea-a994-fa163e34d433,ResourceVersion:15129214,Generation:0,CreationTimestamp:2019-12-17 13:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 11a6c5a8-20cf-11ea-a994-fa163e34d433 0xc0023a26e7 0xc0023a26e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6t5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6t5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6t5g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0023a2750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023a2770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:14:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:13:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-17 13:13:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:14:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7073960d5ae1a001dc46f4b09ef0c8fce72f9c5ca1256b2b61d1750dc9ff2254}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:14:57.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gxcbc" for this suite.
Dec 17 13:16:42.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:16:43.446: INFO: namespace: e2e-tests-deployment-gxcbc, resource: bindings, ignored listing per whitelist
Dec 17 13:16:43.534: INFO: namespace e2e-tests-deployment-gxcbc deletion completed in 1m44.81318622s

• [SLOW TEST:172.384 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
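The deployment spec above exercises proportional scaling: when a paused-then-scaled Deployment owns several ReplicaSets, the new replica total is split across them in proportion to their current sizes. A simplified sketch of that distribution (illustrative only; the real logic lives in the Deployment controller's rollout code, and the function name here is hypothetical):

```python
def proportional_scale(replica_counts, new_total):
    """Distribute new_total across ReplicaSets in proportion to their
    current sizes, handing any rounding leftover to the largest sets
    first. Simplified sketch, not the kube-controller-manager source."""
    old_total = sum(replica_counts) or 1
    scaled = [rs * new_total // old_total for rs in replica_counts]
    leftover = new_total - sum(scaled)
    # floor division can undercount by at most len(counts) - 1 replicas;
    # top up the biggest ReplicaSets until the leftover is used
    for i in sorted(range(len(scaled)), key=lambda i: -replica_counts[i]):
        if leftover == 0:
            break
        scaled[i] += 1
        leftover -= 1
    return scaled

# scaling ReplicaSets of size 2 and 1 up to a total of 4
print(proportional_scale([2, 1], 4))  # [3, 1]
```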
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:16:43.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 17 13:16:46.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-jnllq" to be "success or failure"
Dec 17 13:16:46.436: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 121.298466ms
Dec 17 13:16:48.696: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381237284s
Dec 17 13:16:50.912: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.597630981s
Dec 17 13:16:52.953: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638593018s
Dec 17 13:16:54.975: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66049908s
Dec 17 13:16:57.064: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.749032964s
Dec 17 13:16:59.380: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.064717925s
Dec 17 13:17:01.754: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.439321833s
Dec 17 13:17:03.977: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.661991499s
Dec 17 13:17:06.797: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.482440189s
Dec 17 13:17:10.445: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.130062143s
Dec 17 13:17:12.460: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.144776701s
Dec 17 13:17:16.955: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.640193151s
Dec 17 13:17:19.111: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 32.79607192s
Dec 17 13:17:21.180: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 34.86544583s
Dec 17 13:17:23.897: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.582520841s
Dec 17 13:17:25.926: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.611424943s
Dec 17 13:17:27.961: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 41.646383431s
Dec 17 13:17:30.239: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 43.924340901s
Dec 17 13:17:33.825: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 47.509688256s
STEP: Saw pod success
Dec 17 13:17:33.825: INFO: Pod "downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:17:33.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004 container client-container: 
STEP: delete the pod
Dec 17 13:17:34.878: INFO: Waiting for pod downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:17:34.905: INFO: Pod downwardapi-volume-79cea400-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:17:34.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jnllq" for this suite.
Dec 17 13:17:43.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:17:43.266: INFO: namespace: e2e-tests-downward-api-jnllq, resource: bindings, ignored listing per whitelist
Dec 17 13:17:43.489: INFO: namespace e2e-tests-downward-api-jnllq deletion completed in 8.5594699s

• [SLOW TEST:59.954 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
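The spec above checks that a container's CPU limit is visible through a downward API volume via `resourceFieldRef`. The exposed value is, roughly, the resource quantity divided by the declared divisor and rounded up to an integer. A minimal sketch of that ceiling arithmetic (hypothetical helper, not the e2e framework's or the kubelet's actual code):

```python
import math

def expose_cpu(limit_millicores: int, divisor_millicores: int = 1000) -> int:
    """Sketch of downward API resourceFieldRef semantics for CPU: the
    limit is divided by the divisor and rounded up. Illustrative only."""
    return math.ceil(limit_millicores / divisor_millicores)

# a 1250m (1.25 CPU) limit exposed with a divisor of one whole CPU
print(expose_cpu(1250))  # 2
```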
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:17:43.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 17 13:17:43.887: INFO: Waiting up to 5m0s for pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-containers-m2fk2" to be "success or failure"
Dec 17 13:17:43.928: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 40.44112ms
Dec 17 13:17:45.940: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052589751s
Dec 17 13:17:48.033: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145459812s
Dec 17 13:17:50.179: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.291573892s
Dec 17 13:17:52.217: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329865719s
Dec 17 13:17:54.293: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406084065s
Dec 17 13:17:56.314: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.427165306s
Dec 17 13:17:58.340: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.452779928s
STEP: Saw pod success
Dec 17 13:17:58.340: INFO: Pod "client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:17:58.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 13:17:58.559: INFO: Waiting for pod client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:17:58.596: INFO: Pod client-containers-9c15d84f-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:17:58.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-m2fk2" for this suite.
Dec 17 13:18:06.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:18:06.995: INFO: namespace: e2e-tests-containers-m2fk2, resource: bindings, ignored listing per whitelist
Dec 17 13:18:07.020: INFO: namespace e2e-tests-containers-m2fk2 deletion completed in 8.399061265s

• [SLOW TEST:23.531 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:18:07.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-aa464e68-20cf-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 17 13:18:07.553: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-configmap-gxrbz" to be "success or failure"
Dec 17 13:18:07.585: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 32.301496ms
Dec 17 13:18:09.597: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043849686s
Dec 17 13:18:11.619: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065965489s
Dec 17 13:18:13.635: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082638404s
Dec 17 13:18:15.751: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197998847s
Dec 17 13:18:17.759: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206330865s
Dec 17 13:18:19.769: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.216002726s
STEP: Saw pod success
Dec 17 13:18:19.769: INFO: Pod "pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:18:19.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 17 13:18:20.694: INFO: Waiting for pod pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:18:20.924: INFO: Pod pod-configmaps-aa47e343-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:18:20.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gxrbz" for this suite.
Dec 17 13:18:27.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:18:27.371: INFO: namespace: e2e-tests-configmap-gxrbz, resource: bindings, ignored listing per whitelist
Dec 17 13:18:27.426: INFO: namespace e2e-tests-configmap-gxrbz deletion completed in 6.48088645s

• [SLOW TEST:20.405 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
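The ConfigMap spec above mounts a volume whose `items` list maps a key to a different relative file path. The on-disk effect can be sketched as follows, using a local directory in place of a kubelet-managed volume (hypothetical helper names; not the test's source):

```python
import os
import tempfile

def project_configmap(data, items, mount_dir):
    """Write selected ConfigMap keys to mapped relative paths, mimicking
    how a volume's `items` list renames keys on disk. Sketch only."""
    for item in items:
        full = os.path.join(mount_dir, item["path"])
        os.makedirs(os.path.dirname(full) or mount_dir, exist_ok=True)
        with open(full, "w") as f:
            f.write(data[item["key"]])

mount = tempfile.mkdtemp()
project_configmap(
    {"data-1": "value-1"},
    [{"key": "data-1", "path": "path/to/data-2"}],
    mount,
)
print(open(os.path.join(mount, "path/to/data-2")).read())  # value-1
```

The key `data-1` never appears as a filename; only the mapped path does, which is what the test container reads back.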
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:18:27.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-b651ee3f-20cf-11ea-a5ef-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 17 13:18:27.993: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-29wmm" to be "success or failure"
Dec 17 13:18:28.011: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.90394ms
Dec 17 13:18:30.113: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119515919s
Dec 17 13:18:32.126: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132549987s
Dec 17 13:18:34.435: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441865911s
Dec 17 13:18:36.769: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775708621s
Dec 17 13:18:38.784: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.790998247s
STEP: Saw pod success
Dec 17 13:18:38.784: INFO: Pod "pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:18:38.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 13:18:39.236: INFO: Waiting for pod pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:18:39.248: INFO: Pod pod-projected-secrets-b66ed7fe-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:18:39.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-29wmm" for this suite.
Dec 17 13:18:45.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:18:45.519: INFO: namespace: e2e-tests-projected-29wmm, resource: bindings, ignored listing per whitelist
Dec 17 13:18:45.546: INFO: namespace e2e-tests-projected-29wmm deletion completed in 6.222384807s

• [SLOW TEST:18.120 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:18:45.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 13:18:45.876: INFO: Number of nodes with available pods: 0
Dec 17 13:18:45.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:46.902: INFO: Number of nodes with available pods: 0
Dec 17 13:18:46.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:48.254: INFO: Number of nodes with available pods: 0
Dec 17 13:18:48.254: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:48.927: INFO: Number of nodes with available pods: 0
Dec 17 13:18:48.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:50.012: INFO: Number of nodes with available pods: 0
Dec 17 13:18:50.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:50.927: INFO: Number of nodes with available pods: 0
Dec 17 13:18:50.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:52.371: INFO: Number of nodes with available pods: 0
Dec 17 13:18:52.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:53.293: INFO: Number of nodes with available pods: 0
Dec 17 13:18:53.293: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:53.926: INFO: Number of nodes with available pods: 0
Dec 17 13:18:53.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:54.912: INFO: Number of nodes with available pods: 0
Dec 17 13:18:54.912: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:55.906: INFO: Number of nodes with available pods: 1
Dec 17 13:18:55.906: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 17 13:18:56.007: INFO: Number of nodes with available pods: 0
Dec 17 13:18:56.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:57.035: INFO: Number of nodes with available pods: 0
Dec 17 13:18:57.035: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:58.057: INFO: Number of nodes with available pods: 0
Dec 17 13:18:58.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:18:59.083: INFO: Number of nodes with available pods: 0
Dec 17 13:18:59.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:00.030: INFO: Number of nodes with available pods: 0
Dec 17 13:19:00.030: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:01.102: INFO: Number of nodes with available pods: 0
Dec 17 13:19:01.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:02.078: INFO: Number of nodes with available pods: 0
Dec 17 13:19:02.078: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:03.178: INFO: Number of nodes with available pods: 0
Dec 17 13:19:03.178: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:04.028: INFO: Number of nodes with available pods: 0
Dec 17 13:19:04.028: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:05.563: INFO: Number of nodes with available pods: 0
Dec 17 13:19:05.563: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:06.030: INFO: Number of nodes with available pods: 0
Dec 17 13:19:06.030: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:07.027: INFO: Number of nodes with available pods: 0
Dec 17 13:19:07.027: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:08.045: INFO: Number of nodes with available pods: 0
Dec 17 13:19:08.045: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:09.030: INFO: Number of nodes with available pods: 0
Dec 17 13:19:09.030: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:10.129: INFO: Number of nodes with available pods: 0
Dec 17 13:19:10.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:11.103: INFO: Number of nodes with available pods: 0
Dec 17 13:19:11.103: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:12.031: INFO: Number of nodes with available pods: 0
Dec 17 13:19:12.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 17 13:19:13.029: INFO: Number of nodes with available pods: 1
Dec 17 13:19:13.029: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xbp7m, will wait for the garbage collector to delete the pods
Dec 17 13:19:13.124: INFO: Deleting DaemonSet.extensions daemon-set took: 30.010352ms
Dec 17 13:19:13.224: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.532868ms
Dec 17 13:19:22.656: INFO: Number of nodes with available pods: 0
Dec 17 13:19:22.656: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 13:19:22.667: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xbp7m/daemonsets","resourceVersion":"15129969"},"items":null}

Dec 17 13:19:22.672: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xbp7m/pods","resourceVersion":"15129969"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:19:22.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xbp7m" for this suite.
Dec 17 13:19:30.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:19:30.877: INFO: namespace: e2e-tests-daemonsets-xbp7m, resource: bindings, ignored listing per whitelist
Dec 17 13:19:30.922: INFO: namespace e2e-tests-daemonsets-xbp7m deletion completed in 8.227943233s

• [SLOW TEST:45.376 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
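The DaemonSet spec above waits until exactly one daemon pod is available per node, deletes the pod, and waits for it to be revived. The desired-state calculation behind those "Number of nodes with available pods" checks can be sketched as a pure function (illustrative only; the real controller runs in kube-controller-manager):

```python
def reconcile(nodes, pods_by_node):
    """Return (to_create, to_delete): a DaemonSet-style controller wants
    exactly one daemon pod on every eligible node. Sketch only."""
    to_create = [n for n in nodes if not pods_by_node.get(n)]
    to_delete = []
    for node, pods in pods_by_node.items():
        to_delete.extend(pods[1:])        # surplus pods on one node
        if node not in nodes:
            to_delete.extend(pods[:1])    # pods on vanished nodes
    return to_create, to_delete

# a freshly deleted daemon pod is scheduled for re-creation
print(reconcile(["node-a"], {}))  # (['node-a'], [])
```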
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:19:30.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-dc292508-20cf-11ea-a5ef-0242ac110004
STEP: Creating secret with name secret-projected-all-test-volume-dc2924e0-20cf-11ea-a5ef-0242ac110004
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 17 13:19:31.368: INFO: Waiting up to 5m0s for pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-projected-qkpzm" to be "success or failure"
Dec 17 13:19:31.381: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.254033ms
Dec 17 13:19:33.777: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.408805941s
Dec 17 13:19:35.798: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429415808s
Dec 17 13:19:38.121: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752181096s
Dec 17 13:19:41.067: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.698922365s
Dec 17 13:19:43.075: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.706793084s
STEP: Saw pod success
Dec 17 13:19:43.075: INFO: Pod "projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:19:43.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004 container projected-all-volume-test: 
STEP: delete the pod
Dec 17 13:19:44.758: INFO: Waiting for pod projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:19:44.779: INFO: Pod projected-volume-dc2923c8-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:19:44.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qkpzm" for this suite.
Dec 17 13:19:53.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:19:54.047: INFO: namespace: e2e-tests-projected-qkpzm, resource: bindings, ignored listing per whitelist
Dec 17 13:19:54.124: INFO: namespace e2e-tests-projected-qkpzm deletion completed in 8.449299829s

• [SLOW TEST:23.202 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:19:54.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 17 13:19:54.366: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:20:15.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4nfwd" for this suite.
Dec 17 13:20:23.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:20:23.403: INFO: namespace: e2e-tests-init-container-4nfwd, resource: bindings, ignored listing per whitelist
Dec 17 13:20:23.406: INFO: namespace e2e-tests-init-container-4nfwd deletion completed in 8.197555954s

• [SLOW TEST:29.281 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:20:23.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 17 13:20:23.922: INFO: Waiting up to 5m0s for pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004" in namespace "e2e-tests-emptydir-chrz8" to be "success or failure"
Dec 17 13:20:23.943: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.369636ms
Dec 17 13:20:25.955: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032739004s
Dec 17 13:20:27.983: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060772406s
Dec 17 13:20:30.361: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439088048s
Dec 17 13:20:32.405: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482665154s
Dec 17 13:20:34.455: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.533077193s
Dec 17 13:20:36.486: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.563754505s
STEP: Saw pod success
Dec 17 13:20:36.486: INFO: Pod "pod-fb75fd92-20cf-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:20:36.503: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fb75fd92-20cf-11ea-a5ef-0242ac110004 container test-container: 
STEP: delete the pod
Dec 17 13:20:36.850: INFO: Waiting for pod pod-fb75fd92-20cf-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:20:36.869: INFO: Pod pod-fb75fd92-20cf-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:20:36.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-chrz8" for this suite.
Dec 17 13:20:44.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:20:45.104: INFO: namespace: e2e-tests-emptydir-chrz8, resource: bindings, ignored listing per whitelist
Dec 17 13:20:45.114: INFO: namespace e2e-tests-emptydir-chrz8 deletion completed in 8.215977971s

• [SLOW TEST:21.708 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:20:45.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 17 13:20:45.407: INFO: Waiting up to 5m0s for pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-q76jc" to be "success or failure"
Dec 17 13:20:45.547: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 140.670059ms
Dec 17 13:20:47.584: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177661489s
Dec 17 13:20:49.604: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19755716s
Dec 17 13:20:51.623: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215848708s
Dec 17 13:20:54.621: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.214396553s
Dec 17 13:20:56.636: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.228779131s
Dec 17 13:20:58.684: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.276963506s
Dec 17 13:21:00.707: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.300100332s
STEP: Saw pod success
Dec 17 13:21:00.707: INFO: Pod "downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:21:00.722: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 17 13:21:01.007: INFO: Waiting for pod downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:21:01.034: INFO: Pod downward-api-085f0a69-20d0-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:21:01.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q76jc" for this suite.
Dec 17 13:21:07.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:21:07.355: INFO: namespace: e2e-tests-downward-api-q76jc, resource: bindings, ignored listing per whitelist
Dec 17 13:21:07.403: INFO: namespace e2e-tests-downward-api-q76jc deletion completed in 6.359296264s

• [SLOW TEST:22.288 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:21:07.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nkhd4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 13:21:07.548: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 13:21:50.032: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-nkhd4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 13:21:50.032: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 13:21:50.937: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:21:50.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-nkhd4" for this suite.
Dec 17 13:22:27.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:22:27.142: INFO: namespace: e2e-tests-pod-network-test-nkhd4, resource: bindings, ignored listing per whitelist
Dec 17 13:22:27.224: INFO: namespace e2e-tests-pod-network-test-nkhd4 deletion completed in 36.235226557s

• [SLOW TEST:79.821 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 17 13:22:27.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 17 13:22:27.710: INFO: Waiting up to 5m0s for pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004" in namespace "e2e-tests-downward-api-lwgq8" to be "success or failure"
Dec 17 13:22:27.734: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.067549ms
Dec 17 13:22:30.648: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.937811367s
Dec 17 13:22:32.688: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.977393167s
Dec 17 13:22:34.857: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.146629852s
Dec 17 13:22:37.473: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.762388524s
Dec 17 13:22:39.908: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.197863657s
Dec 17 13:22:41.986: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.275727662s
Dec 17 13:22:44.028: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.317427323s
Dec 17 13:22:46.064: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.353414438s
STEP: Saw pod success
Dec 17 13:22:46.064: INFO: Pod "downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004" satisfied condition "success or failure"
Dec 17 13:22:46.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 17 13:22:46.385: INFO: Waiting for pod downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004 to disappear
Dec 17 13:22:46.400: INFO: Pod downward-api-45437b5e-20d0-11ea-a5ef-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 17 13:22:46.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lwgq8" for this suite.
Dec 17 13:22:54.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:22:54.702: INFO: namespace: e2e-tests-downward-api-lwgq8, resource: bindings, ignored listing per whitelist
Dec 17 13:22:54.848: INFO: namespace e2e-tests-downward-api-lwgq8 deletion completed in 8.442114379s

• [SLOW TEST:27.624 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
Dec 17 13:22:54.850: INFO: Running AfterSuite actions on all nodes
Dec 17 13:22:54.850: INFO: Running AfterSuite actions on node 1
Dec 17 13:22:54.850: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9337.668 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS