I1231 10:47:14.441176       8 e2e.go:224] Starting e2e run "e78e7d28-2bba-11ea-a129-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577789233 - Will randomize all specs
Will run 201 of 2164 specs

Dec 31 10:47:14.677: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 10:47:14.680: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 31 10:47:14.700: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 31 10:47:14.766: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 31 10:47:14.766: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 31 10:47:14.766: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 31 10:47:14.777: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 31 10:47:14.777: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 31 10:47:14.777: INFO: e2e test version: v1.13.12
Dec 31 10:47:14.780: INFO: kube-apiserver version: v1.13.8
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:47:14.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Dec 31 10:47:14.911: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 31 10:47:27.677: INFO: Successfully updated pod "labelsupdatee847e143-2bba-11ea-a129-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:47:29.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c66sb" for this suite.
Dec 31 10:47:54.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:47:54.113: INFO: namespace: e2e-tests-downward-api-c66sb, resource: bindings, ignored listing per whitelist
Dec 31 10:47:54.199: INFO: namespace e2e-tests-downward-api-c66sb deletion completed in 24.2379892s

• [SLOW TEST:39.419 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:47:54.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-v9sv6
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-v9sv6
STEP: Deleting pre-stop pod
Dec 31 10:48:19.718: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:48:19.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-v9sv6" for this suite.
Dec 31 10:49:05.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:49:05.996: INFO: namespace: e2e-tests-prestop-v9sv6, resource: bindings, ignored listing per whitelist
Dec 31 10:49:06.133: INFO: namespace e2e-tests-prestop-v9sv6 deletion completed in 46.342195441s

• [SLOW TEST:71.934 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:49:06.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 31 10:49:06.379: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fgtqv,SelfLink:/api/v1/namespaces/e2e-tests-watch-fgtqv/configmaps/e2e-watch-test-watch-closed,UID:2ab40787-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671081,Generation:0,CreationTimestamp:2019-12-31 10:49:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 10:49:06.380: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fgtqv,SelfLink:/api/v1/namespaces/e2e-tests-watch-fgtqv/configmaps/e2e-watch-test-watch-closed,UID:2ab40787-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671082,Generation:0,CreationTimestamp:2019-12-31 10:49:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 31 10:49:06.413: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fgtqv,SelfLink:/api/v1/namespaces/e2e-tests-watch-fgtqv/configmaps/e2e-watch-test-watch-closed,UID:2ab40787-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671083,Generation:0,CreationTimestamp:2019-12-31 10:49:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 10:49:06.414: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fgtqv,SelfLink:/api/v1/namespaces/e2e-tests-watch-fgtqv/configmaps/e2e-watch-test-watch-closed,UID:2ab40787-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671085,Generation:0,CreationTimestamp:2019-12-31 10:49:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:49:06.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fgtqv" for this suite.
Dec 31 10:49:12.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:49:12.898: INFO: namespace: e2e-tests-watch-fgtqv, resource: bindings, ignored listing per whitelist
Dec 31 10:49:12.971: INFO: namespace e2e-tests-watch-fgtqv deletion completed in 6.463135624s

• [SLOW TEST:6.837 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:49:12.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2ecbf44c-2bbb-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 10:49:13.463: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-rhgms" to be "success or failure"
Dec 31 10:49:13.519: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.048179ms
Dec 31 10:49:15.697: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233203604s
Dec 31 10:49:17.735: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271898915s
Dec 31 10:49:20.545: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.081031575s
Dec 31 10:49:22.566: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.102377415s
Dec 31 10:49:24.608: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.144807688s
Dec 31 10:49:26.855: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.391603734s
STEP: Saw pod success
Dec 31 10:49:26.855: INFO: Pod "pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 10:49:26.899: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 10:49:27.073: INFO: Waiting for pod pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005 to disappear
Dec 31 10:49:27.084: INFO: Pod pod-projected-secrets-2eccd647-2bbb-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:49:27.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rhgms" for this suite.
Dec 31 10:49:35.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:49:35.408: INFO: namespace: e2e-tests-projected-rhgms, resource: bindings, ignored listing per whitelist
Dec 31 10:49:35.507: INFO: namespace e2e-tests-projected-rhgms deletion completed in 8.39644112s

• [SLOW TEST:22.536 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:49:35.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 10:49:35.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:49:45.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-j9dfz" for this suite.
Dec 31 10:50:27.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:50:28.143: INFO: namespace: e2e-tests-pods-j9dfz, resource: bindings, ignored listing per whitelist
Dec 31 10:50:28.158: INFO: namespace e2e-tests-pods-j9dfz deletion completed in 42.318670206s

• [SLOW TEST:52.651 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:50:28.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 10:50:28.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:50:30.381: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 10:50:30.381: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 31 10:50:30.454: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 31 10:50:30.527: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 31 10:50:30.637: INFO: scanned /root for discovery docs: 
Dec 31 10:50:30.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:50:56.307: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 31 10:50:56.307: INFO: stdout: "Created e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a\nScaling up e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 31 10:50:56.307: INFO: stdout: "Created e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a\nScaling up e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 31 10:50:56.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:50:56.548: INFO: stderr: ""
Dec 31 10:50:56.548: INFO: stdout: "e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a-zpk2z e2e-test-nginx-rc-b6j22 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 31 10:51:01.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:51:01.733: INFO: stderr: ""
Dec 31 10:51:01.733: INFO: stdout: "e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a-zpk2z e2e-test-nginx-rc-b6j22 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 31 10:51:06.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:51:06.999: INFO: stderr: ""
Dec 31 10:51:07.000: INFO: stdout: "e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a-zpk2z "
Dec 31 10:51:07.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a-zpk2z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:51:07.248: INFO: stderr: ""
Dec 31 10:51:07.248: INFO: stdout: "true"
Dec 31 10:51:07.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a-zpk2z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:51:07.398: INFO: stderr: ""
Dec 31 10:51:07.398: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 31 10:51:07.398: INFO: e2e-test-nginx-rc-39d5dc82409b2dfdfe9d472e3b63d49a-zpk2z is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 31 10:51:07.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-w57jr'
Dec 31 10:51:07.541: INFO: stderr: ""
Dec 31 10:51:07.541: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:51:07.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w57jr" for this suite.
Dec 31 10:51:31.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:51:31.920: INFO: namespace: e2e-tests-kubectl-w57jr, resource: bindings, ignored listing per whitelist
Dec 31 10:51:32.019: INFO: namespace e2e-tests-kubectl-w57jr deletion completed in 24.371049696s

• [SLOW TEST:63.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:51:32.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 31 10:51:43.143: INFO: Successfully updated pod "pod-update-81ae82d3-2bbb-11ea-a129-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 31 10:51:43.165: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:51:43.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-64lmt" for this suite.
Dec 31 10:52:11.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:52:11.449: INFO: namespace: e2e-tests-pods-64lmt, resource: bindings, ignored listing per whitelist
Dec 31 10:52:11.464: INFO: namespace e2e-tests-pods-64lmt deletion completed in 28.290884687s

• [SLOW TEST:39.444 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:52:11.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 10:52:11.598: INFO: Creating deployment "test-recreate-deployment"
Dec 31 10:52:11.609: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 31 10:52:11.617: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 31 10:52:13.702: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 31 10:52:13.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 10:52:15.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 10:52:18.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 10:52:20.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 10:52:21.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386331, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 10:52:23.724: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 31 10:52:23.740: INFO: Updating deployment test-recreate-deployment
Dec 31 10:52:23.740: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 31 10:52:24.471: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-6fvq6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6fvq6/deployments/test-recreate-deployment,UID:991e82a9-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671519,Generation:2,CreationTimestamp:2019-12-31 10:52:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-31 10:52:24 +0000 UTC 2019-12-31 10:52:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-31 10:52:24 +0000 UTC 2019-12-31 10:52:11 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Dec 31 10:52:24.504: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-6fvq6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6fvq6/replicasets/test-recreate-deployment-589c4bfd,UID:a08ef230-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671518,Generation:1,CreationTimestamp:2019-12-31 10:52:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 991e82a9-2bbb-11ea-a994-fa163e34d433 0xc0018e9d2f 0xc0018e9d40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 31 10:52:24.505: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 31 10:52:24.505: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-6fvq6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6fvq6/replicasets/test-recreate-deployment-5bf7f65dc,UID:9926858f-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671509,Generation:2,CreationTimestamp:2019-12-31 10:52:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 991e82a9-2bbb-11ea-a994-fa163e34d433 0xc0018e9e00 0xc0018e9e01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 31 10:52:24.522: INFO: Pod "test-recreate-deployment-589c4bfd-tqkgr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-tqkgr,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-6fvq6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6fvq6/pods/test-recreate-deployment-589c4bfd-tqkgr,UID:a0ad399a-2bbb-11ea-a994-fa163e34d433,ResourceVersion:16671523,Generation:0,CreationTimestamp:2019-12-31 10:52:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a08ef230-2bbb-11ea-a994-fa163e34d433 0xc001cb7c5f 0xc001cb7c70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r99jw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r99jw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r99jw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cb7cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cb7cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 10:52:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 10:52:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 10:52:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 10:52:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 10:52:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 10:52:24.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6fvq6" for this suite. 
Dec 31 10:52:32.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 10:52:33.029: INFO: namespace: e2e-tests-deployment-6fvq6, resource: bindings, ignored listing per whitelist Dec 31 10:52:33.100: INFO: namespace e2e-tests-deployment-6fvq6 deletion completed in 8.569014529s • [SLOW TEST:21.635 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 31 10:52:33.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 10:52:33.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rdmzp" for this suite. 
Dec 31 10:52:57.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 10:52:57.706: INFO: namespace: e2e-tests-pods-rdmzp, resource: bindings, ignored listing per whitelist Dec 31 10:52:57.767: INFO: namespace e2e-tests-pods-rdmzp deletion completed in 24.341632898s • [SLOW TEST:24.667 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 31 10:52:57.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 31 10:52:57.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-9k5q9' Dec 31 10:52:58.204: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 31 10:52:58.204: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Dec 31 10:53:02.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9k5q9' Dec 31 10:53:02.596: INFO: stderr: "" Dec 31 10:53:02.596: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 10:53:02.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9k5q9" for this suite. 
Dec 31 10:53:26.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 10:53:26.809: INFO: namespace: e2e-tests-kubectl-9k5q9, resource: bindings, ignored listing per whitelist Dec 31 10:53:26.923: INFO: namespace e2e-tests-kubectl-9k5q9 deletion completed in 24.259413827s • [SLOW TEST:29.156 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 31 10:53:26.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 31 10:53:27.337: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 10:53:38.080: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "e2e-tests-pods-fmjb2" for this suite. Dec 31 10:54:22.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 10:54:22.320: INFO: namespace: e2e-tests-pods-fmjb2, resource: bindings, ignored listing per whitelist Dec 31 10:54:22.368: INFO: namespace e2e-tests-pods-fmjb2 deletion completed in 44.275673864s • [SLOW TEST:55.444 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 31 10:54:22.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 31 10:54:35.166: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e7307d91-2bbb-11ea-a129-0242ac110005" Dec 31 10:54:35.166: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-e7307d91-2bbb-11ea-a129-0242ac110005" in namespace "e2e-tests-pods-wqkns" to be "terminated due to deadline exceeded" Dec 31 10:54:35.344: INFO: Pod "pod-update-activedeadlineseconds-e7307d91-2bbb-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 177.979003ms Dec 31 10:54:37.403: INFO: Pod "pod-update-activedeadlineseconds-e7307d91-2bbb-11ea-a129-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.236861175s Dec 31 10:54:37.403: INFO: Pod "pod-update-activedeadlineseconds-e7307d91-2bbb-11ea-a129-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 10:54:37.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wqkns" for this suite. Dec 31 10:54:43.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 10:54:43.475: INFO: namespace: e2e-tests-pods-wqkns, resource: bindings, ignored listing per whitelist Dec 31 10:54:43.682: INFO: namespace e2e-tests-pods-wqkns deletion completed in 6.269565367s • [SLOW TEST:21.314 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 31 10:54:43.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-t9n6z [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Dec 31 10:54:44.026: INFO: Found 0 stateful pods, waiting for 3 Dec 31 10:54:54.041: INFO: Found 2 stateful pods, waiting for 3 Dec 31 10:55:04.090: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 31 10:55:04.090: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 31 10:55:04.090: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 31 10:55:14.044: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 31 10:55:14.044: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 31 10:55:14.044: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 31 10:55:14.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t9n6z ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 31 10:55:14.890: INFO: stderr: "" Dec 31 10:55:14.890: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 31 10:55:14.890: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 31 10:55:24.966: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Dec 31 10:55:35.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t9n6z ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 10:55:35.694: INFO: stderr: "" Dec 31 10:55:35.694: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 31 10:55:35.694: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 31 10:55:45.760: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:55:45.761: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:55:45.761: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:55:45.761: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:55:55.813: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:55:55.813: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:55:55.813: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:56:05.798: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:56:05.798: 
INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:56:05.798: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:56:16.048: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:56:16.048: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:56:25.994: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:56:25.995: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 31 10:56:35.780: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update STEP: Rolling back to a previous revision Dec 31 10:56:45.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t9n6z ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 31 10:56:46.694: INFO: stderr: "" Dec 31 10:56:46.694: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 31 10:56:46.694: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 31 10:56:56.766: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 31 10:57:06.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t9n6z ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 10:57:07.423: INFO: stderr: "" Dec 31 10:57:07.423: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 31 10:57:07.423: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 31 
10:57:17.517: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:57:17.517: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 31 10:57:17.517: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 31 10:57:28.030: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:57:28.031: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 31 10:57:28.031: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 31 10:57:37.564: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:57:37.564: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 31 10:57:47.544: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update Dec 31 10:57:47.544: INFO: Waiting for Pod e2e-tests-statefulset-t9n6z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 31 10:57:57.927: INFO: Waiting for StatefulSet e2e-tests-statefulset-t9n6z/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 31 10:58:07.549: INFO: Deleting all statefulset in ns e2e-tests-statefulset-t9n6z Dec 31 10:58:07.555: INFO: Scaling statefulset ss2 to 0 Dec 31 10:58:47.597: INFO: Waiting for statefulset status.replicas updated to 0 Dec 31 10:58:47.604: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 10:58:47.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-statefulset-t9n6z" for this suite. Dec 31 10:58:55.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 10:58:55.987: INFO: namespace: e2e-tests-statefulset-t9n6z, resource: bindings, ignored listing per whitelist Dec 31 10:58:55.995: INFO: namespace e2e-tests-statefulset-t9n6z deletion completed in 8.256902564s • [SLOW TEST:252.313 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 31 10:58:55.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Dec 31 10:58:56.312: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 31 10:58:56.324: INFO: Waiting for terminating namespaces to be deleted... 
Dec 31 10:58:56.327: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 31 10:58:56.349: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 31 10:58:56.349: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 31 10:58:56.349: INFO: Container weave ready: true, restart count 0
Dec 31 10:58:56.349: INFO: Container weave-npc ready: true, restart count 0
Dec 31 10:58:56.349: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 31 10:58:56.349: INFO: Container coredns ready: true, restart count 0
Dec 31 10:58:56.349: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 31 10:58:56.349: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 31 10:58:56.349: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 31 10:58:56.349: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 31 10:58:56.349: INFO: Container coredns ready: true, restart count 0
Dec 31 10:58:56.349: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 31 10:58:56.349: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 31 10:58:56.489: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-8a738076-2bbc-11ea-a129-0242ac110005.15e56f9681da0eeb], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-25scq/filler-pod-8a738076-2bbc-11ea-a129-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8a738076-2bbc-11ea-a129-0242ac110005.15e56f97c878b699], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8a738076-2bbc-11ea-a129-0242ac110005.15e56f9884671639], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8a738076-2bbc-11ea-a129-0242ac110005.15e56f98ae4befea], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e56f98d69ec0b9], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
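The "Insufficient cpu" event follows directly from the arithmetic in the log: the scheduler sums the CPU requests already on the node plus the candidate pod's request and compares against the node's allocatable CPU. A simplified Python sketch of that fit check (the per-pod millicpu values are taken from the log; the allocatable figure and helper name are assumptions for illustration):

```python
# CPU requests (millicores) the test logged for node hunter-server-hu5at5svl7ps.
existing_requests_m = {
    "coredns-54ff9cd656-79kxx": 100,
    "coredns-54ff9cd656-bmkk4": 100,
    "etcd-hunter-server-hu5at5svl7ps": 0,
    "kube-apiserver-hunter-server-hu5at5svl7ps": 250,
    "kube-controller-manager-hunter-server-hu5at5svl7ps": 200,
    "kube-proxy-bqnnz": 0,
    "kube-scheduler-hunter-server-hu5at5svl7ps": 100,
    "weave-net-tqwf2": 20,
}

def fits(allocatable_m, requested_m, new_request_m):
    """Simplified CPU fit predicate: does the new request fit in what is left?"""
    return requested_m + new_request_m <= allocatable_m

used_m = sum(existing_requests_m.values())  # 770m already requested
allocatable_m = 2000                        # assumed node allocatable CPU

# The filler pod claims the remaining CPU, so one more request cannot fit:
filler_m = allocatable_m - used_m
print(fits(allocatable_m, used_m + filler_m, 100))  # False -> "Insufficient cpu"
```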
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:59:07.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-25scq" for this suite.
Dec 31 10:59:13.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:59:14.236: INFO: namespace: e2e-tests-sched-pred-25scq, resource: bindings, ignored listing per whitelist
Dec 31 10:59:14.236: INFO: namespace e2e-tests-sched-pred-25scq deletion completed in 6.449311391s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:18.240 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:59:14.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 10:59:15.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 31 10:59:15.962: INFO: stderr: ""
Dec 31 10:59:15.963: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 31 10:59:15.966: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:59:15.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5nzs8" for this suite.
Dec 31 10:59:22.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 10:59:22.170: INFO: namespace: e2e-tests-kubectl-5nzs8, resource: bindings, ignored listing per whitelist
Dec 31 10:59:22.293: INFO: namespace e2e-tests-kubectl-5nzs8 deletion completed in 6.302089982s
S [SKIPPING] [8.057 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 10:59:15.966: Not supported for server versions before "1.13.12"
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 10:59:22.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-bvfm
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 10:59:22.512: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bvfm" in namespace "e2e-tests-subpath-tzh47" to be "success or failure"
Dec 31 10:59:22.617: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 105.79983ms
Dec 31 10:59:24.944: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431915673s
Dec 31 10:59:26.963: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451023473s
Dec 31 10:59:28.981: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469027982s
Dec 31 10:59:30.994: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482806249s
Dec 31 10:59:33.080: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.567866188s
Dec 31 10:59:35.091: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.579653911s
Dec 31 10:59:37.211: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.698973076s
Dec 31 10:59:39.224: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.712392612s
Dec 31 10:59:41.253: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 18.741002409s
Dec 31 10:59:43.268: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 20.755977955s
Dec 31 10:59:45.288: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 22.776435566s
Dec 31 10:59:47.303: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 24.791276412s
Dec 31 10:59:49.318: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 26.806785951s
Dec 31 10:59:51.335: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 28.82359367s
Dec 31 10:59:53.355: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 30.843721262s
Dec 31 10:59:55.373: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 32.861230951s
Dec 31 10:59:57.391: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Running", Reason="", readiness=false. Elapsed: 34.879621956s
Dec 31 10:59:59.413: INFO: Pod "pod-subpath-test-configmap-bvfm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.901215252s
STEP: Saw pod success
Dec 31 10:59:59.413: INFO: Pod "pod-subpath-test-configmap-bvfm" satisfied condition "success or failure"
Dec 31 10:59:59.419: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-bvfm container test-container-subpath-configmap-bvfm:
STEP: delete the pod
Dec 31 10:59:59.825: INFO: Waiting for pod pod-subpath-test-configmap-bvfm to disappear
Dec 31 10:59:59.872: INFO: Pod pod-subpath-test-configmap-bvfm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bvfm
Dec 31 10:59:59.872: INFO: Deleting pod "pod-subpath-test-configmap-bvfm" in namespace "e2e-tests-subpath-tzh47"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 10:59:59.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-tzh47" for this suite.
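The Pending/Running/Succeeded sequence above is the framework's wait-for-pod-success loop: poll the pod's phase at intervals until it reaches a terminal phase (Succeeded or Failed) or the 5-minute timeout elapses. A simplified sketch, assuming an abstract `get_phase` callable in place of a real Kubernetes API read (the helper and parameter names are hypothetical, not the framework's):

```python
import time

def wait_for_pod_success(get_phase, timeout_s=300, poll_s=2.0, sleep=time.sleep):
    """Poll a pod's phase until Succeeded (True) or Failed (False), or raise
    on timeout. get_phase is any zero-argument callable returning the current
    phase string."""
    elapsed = 0.0
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        sleep(poll_s)
        elapsed += poll_s
    raise TimeoutError("pod never reached a terminal phase")

# Replaying the phase sequence from the log (Pending -> Running -> Succeeded),
# with sleeping disabled so the replay is instant:
phases = iter(["Pending"] * 9 + ["Running"] * 9 + ["Succeeded"])
print(wait_for_pod_success(lambda: next(phases), sleep=lambda _: None))  # True
```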
Dec 31 11:00:06.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:00:06.156: INFO: namespace: e2e-tests-subpath-tzh47, resource: bindings, ignored listing per whitelist
Dec 31 11:00:06.249: INFO: namespace e2e-tests-subpath-tzh47 deletion completed in 6.272635118s
• [SLOW TEST:43.956 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:00:06.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:00:06.585: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 31 11:00:06.714: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 31 11:00:12.924: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 31 11:00:16.983: INFO: Creating deployment
"test-rolling-update-deployment" Dec 31 11:00:17.004: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 31 11:00:17.109: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 31 11:00:19.305: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 31 11:00:19.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 31 11:00:21.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 31 11:00:23.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 31 11:00:25.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 31 11:00:27.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713386817, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 31 11:00:29.496: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 31 11:00:29.516: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-xzvzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xzvzr/deployments/test-rolling-update-deployment,UID:ba6e9b78-2bbc-11ea-a994-fa163e34d433,ResourceVersion:16672702,Generation:1,CreationTimestamp:2019-12-31 11:00:16 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-31 11:00:17 +0000 UTC 2019-12-31 11:00:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-31 11:00:27 +0000 UTC 2019-12-31 11:00:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 31 11:00:29.526: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-xzvzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xzvzr/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ba880802-2bbc-11ea-a994-fa163e34d433,ResourceVersion:16672693,Generation:1,CreationTimestamp:2019-12-31 11:00:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ba6e9b78-2bbc-11ea-a994-fa163e34d433 0xc000e0dad7 0xc000e0dad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 31 11:00:29.526: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 31 11:00:29.526: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-xzvzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xzvzr/replicasets/test-rolling-update-controller,UID:b43c48e1-2bbc-11ea-a994-fa163e34d433,ResourceVersion:16672701,Generation:2,CreationTimestamp:2019-12-31 11:00:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ba6e9b78-2bbc-11ea-a994-fa163e34d433 0xc000e0da17 0xc000e0da18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 31 11:00:29.536: INFO: Pod "test-rolling-update-deployment-75db98fb4c-52n2k" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-52n2k,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-xzvzr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xzvzr/pods/test-rolling-update-deployment-75db98fb4c-52n2k,UID:baa683e4-2bbc-11ea-a994-fa163e34d433,ResourceVersion:16672692,Generation:0,CreationTimestamp:2019-12-31 11:00:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ba880802-2bbc-11ea-a994-fa163e34d433 0xc001eaf967 0xc001eaf968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6tjs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6tjs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x6tjs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eaf9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eaf9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:00:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:00:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:00:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-31 11:00:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-31 11:00:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b6702c8871552855a5e10159cfd5d2e498fb79dd2954f9d519275891c7a253d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 11:00:29.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-xzvzr" for this suite.
Dec 31 11:00:39.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:00:40.048: INFO: namespace: e2e-tests-deployment-xzvzr, resource: bindings, ignored listing per whitelist
Dec 31 11:00:40.055: INFO: namespace e2e-tests-deployment-xzvzr deletion completed in 10.511293121s
• [SLOW TEST:33.806 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:00:40.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:00:40.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-56tqk" to be "success or failure"
Dec 31 11:00:40.415: INFO: Pod
"downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.253082ms
Dec 31 11:00:42.440: INFO: Pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036136944s
Dec 31 11:00:44.473: INFO: Pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069046761s
Dec 31 11:00:47.675: INFO: Pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.271082664s
Dec 31 11:00:49.716: INFO: Pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.312001281s
Dec 31 11:00:51.740: INFO: Pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.335692057s
STEP: Saw pod success
Dec 31 11:00:51.740: INFO: Pod "downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:00:51.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005 container client-container:
STEP: delete the pod
Dec 31 11:00:52.806: INFO: Waiting for pod downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005 to disappear
Dec 31 11:00:53.149: INFO: Pod downwardapi-volume-c85b73ed-2bbc-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:00:53.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-56tqk" for this suite.
Dec 31 11:00:59.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:00:59.262: INFO: namespace: e2e-tests-downward-api-56tqk, resource: bindings, ignored listing per whitelist
Dec 31 11:00:59.370: INFO: namespace e2e-tests-downward-api-56tqk deletion completed in 6.168202217s
• [SLOW TEST:19.314 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:00:59.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-d3debaaf-2bbc-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:00:59.700: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-ws4m7" to be "success or failure"
Dec 31 11:00:59.715: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005":
Phase="Pending", Reason="", readiness=false. Elapsed: 14.455677ms
Dec 31 11:01:01.728: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027819827s
Dec 31 11:01:03.778: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077650692s
Dec 31 11:01:06.015: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314562534s
Dec 31 11:01:08.036: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335578856s
Dec 31 11:01:10.045: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.344586267s
STEP: Saw pod success
Dec 31 11:01:10.045: INFO: Pod "pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:01:10.049: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 31 11:01:10.412: INFO: Waiting for pod pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005 to disappear
Dec 31 11:01:10.430: INFO: Pod pod-configmaps-d3e0b3ff-2bbc-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:01:10.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ws4m7" for this suite.
Dec 31 11:01:16.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:01:16.674: INFO: namespace: e2e-tests-configmap-ws4m7, resource: bindings, ignored listing per whitelist
Dec 31 11:01:16.747: INFO: namespace e2e-tests-configmap-ws4m7 deletion completed in 6.310358049s
• [SLOW TEST:17.377 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:01:16.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:01:23.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-skrsx" for this suite.
Dec 31 11:01:29.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:01:29.691: INFO: namespace: e2e-tests-namespaces-skrsx, resource: bindings, ignored listing per whitelist
Dec 31 11:01:29.798: INFO: namespace e2e-tests-namespaces-skrsx deletion completed in 6.226054334s
STEP: Destroying namespace "e2e-tests-nsdeletetest-l9pm5" for this suite.
Dec 31 11:01:29.802: INFO: Namespace e2e-tests-nsdeletetest-l9pm5 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-k4t95" for this suite.
Dec 31 11:01:35.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:01:35.905: INFO: namespace: e2e-tests-nsdeletetest-k4t95, resource: bindings, ignored listing per whitelist
Dec 31 11:01:35.979: INFO: namespace e2e-tests-nsdeletetest-k4t95 deletion completed in 6.177672268s
• [SLOW TEST:19.232 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:01:35.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:01:36.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-5sfwf" to be "success or failure"
Dec 31 11:01:36.276: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.224759ms
Dec 31 11:01:38.284: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063149848s
Dec 31 11:01:40.329: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108523381s
Dec 31 11:01:42.942: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.721524062s
Dec 31 11:01:45.037: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816743721s
Dec 31 11:01:47.075: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.85403289s
STEP: Saw pod success
Dec 31 11:01:47.075: INFO: Pod "downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:01:47.089: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005 container client-container:
STEP: delete the pod
Dec 31 11:01:47.374: INFO: Waiting for pod downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005 to disappear
Dec 31 11:01:47.380: INFO: Pod downwardapi-volume-e99c4708-2bbc-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:01:47.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5sfwf" for this suite.
Dec 31 11:01:53.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:01:53.558: INFO: namespace: e2e-tests-projected-5sfwf, resource: bindings, ignored listing per whitelist
Dec 31 11:01:53.610: INFO: namespace e2e-tests-projected-5sfwf deletion completed in 6.209575682s
• [SLOW TEST:17.630 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:01:53.610: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 31 11:01:54.006: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 31 11:01:59.022: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:02:01.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-wzx2t" for this suite.
Dec 31 11:02:13.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:02:13.986: INFO: namespace: e2e-tests-replication-controller-wzx2t, resource: bindings, ignored listing per whitelist
Dec 31 11:02:14.005: INFO: namespace e2e-tests-replication-controller-wzx2t deletion completed in 12.594664315s
• [SLOW TEST:20.395 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a
kubernetes client
Dec 31 11:02:14.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:02:14.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 31 11:02:14.864: INFO: stderr: ""
Dec 31 11:02:14.865: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:02:14.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7rq2g" for this suite.
Dec 31 11:02:20.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:02:21.051: INFO: namespace: e2e-tests-kubectl-7rq2g, resource: bindings, ignored listing per whitelist
Dec 31 11:02:21.105: INFO: namespace e2e-tests-kubectl-7rq2g deletion completed in 6.220803206s
• [SLOW TEST:7.099 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:02:21.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-048db717-2bbd-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-048db717-2bbd-11ea-a129-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:03:49.419: INFO: Waiting up to 3m0s for
all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-stw5g" for this suite.
Dec 31 11:04:13.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:04:13.567: INFO: namespace: e2e-tests-projected-stw5g, resource: bindings, ignored listing per whitelist
Dec 31 11:04:13.655: INFO: namespace e2e-tests-projected-stw5g deletion completed in 24.22252315s
• [SLOW TEST:112.550 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:04:13.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 31 11:04:14.098: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 31 11:04:14.098: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:16.275: INFO: stderr: ""
Dec 31 11:04:16.275: INFO: stdout: "service/redis-slave created\n"
Dec 31 11:04:16.276: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 31 11:04:16.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:16.925: INFO: stderr: ""
Dec 31 11:04:16.925: INFO: stdout: "service/redis-master created\n"
Dec 31 11:04:16.925: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 31 11:04:16.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:17.554: INFO: stderr: ""
Dec 31 11:04:17.554: INFO: stdout: "service/frontend created\n"
Dec 31 11:04:17.555: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 31 11:04:17.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31
11:04:18.138: INFO: stderr: ""
Dec 31 11:04:18.138: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 31 11:04:18.139: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 31 11:04:18.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:18.845: INFO: stderr: ""
Dec 31 11:04:18.845: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 31 11:04:18.846: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 31 11:04:18.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:20.389: INFO: stderr: ""
Dec 31 11:04:20.389: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 31 11:04:20.389: INFO: Waiting for all frontend pods to be Running.
Dec 31 11:04:45.441: INFO: Waiting for frontend to serve content.
Dec 31 11:04:49.655: INFO: Trying to add a new entry to the guestbook.
Dec 31 11:04:49.720: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 31 11:04:49.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:50.145: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:04:50.145: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 11:04:50.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:50.337: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:04:50.337: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 11:04:50.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:50.781: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:04:50.781: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 11:04:50.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:50.971: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:04:50.971: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 11:04:50.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:51.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:04:51.250: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 11:04:51.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xfl2r'
Dec 31 11:04:51.633: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:04:51.633: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:04:51.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xfl2r" for this suite.
Dec 31 11:05:35.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:05:35.862: INFO: namespace: e2e-tests-kubectl-xfl2r, resource: bindings, ignored listing per whitelist
Dec 31 11:05:35.981: INFO: namespace e2e-tests-kubectl-xfl2r deletion completed in 44.329580326s
• [SLOW TEST:82.325 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:05:35.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-78c04229-2bbd-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-78c04229-2bbd-11ea-a129-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:07:08.533: INFO: Waiting up to 3m0s for all (but 0)
nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xjjgm" for this suite.
Dec 31 11:07:32.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:07:32.656: INFO: namespace: e2e-tests-configmap-xjjgm, resource: bindings, ignored listing per whitelist
Dec 31 11:07:32.713: INFO: namespace e2e-tests-configmap-xjjgm deletion completed in 24.165896512s
• [SLOW TEST:116.732 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:07:32.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1231 11:08:03.525569 8 metrics_grabber.go:81] Master node is not registered.
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 31 11:08:03.525: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 31 11:08:03.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ch8fq" for this suite. 
Dec 31 11:08:14.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:08:15.498: INFO: namespace: e2e-tests-gc-ch8fq, resource: bindings, ignored listing per whitelist
Dec 31 11:08:15.590: INFO: namespace e2e-tests-gc-ch8fq deletion completed in 12.058151352s

• [SLOW TEST:42.877 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:08:15.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:08:15.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-ql54w" to be "success or failure"
Dec 31 11:08:15.988: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.600872ms
Dec 31 11:08:17.999: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022295133s
Dec 31 11:08:20.027: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05079903s
Dec 31 11:08:22.680: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703387616s
Dec 31 11:08:24.930: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953894055s
Dec 31 11:08:27.178: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.201604116s
STEP: Saw pod success
Dec 31 11:08:27.178: INFO: Pod "downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:08:27.191: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:08:27.446: INFO: Waiting for pod downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005 to disappear
Dec 31 11:08:27.475: INFO: Pod downwardapi-volume-d7e1044b-2bbd-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:08:27.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ql54w" for this suite.
Dec 31 11:08:35.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:08:35.594: INFO: namespace: e2e-tests-projected-ql54w, resource: bindings, ignored listing per whitelist
Dec 31 11:08:35.663: INFO: namespace e2e-tests-projected-ql54w deletion completed in 8.176743589s

• [SLOW TEST:20.072 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:08:35.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e3d29340-2bbd-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 11:08:36.051: INFO: Waiting up to 5m0s for pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-w6b7d" to be "success or failure"
Dec 31 11:08:36.099: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.884163ms
Dec 31 11:08:38.115: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064035313s
Dec 31 11:08:40.191: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139063067s
Dec 31 11:08:42.636: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584146746s
Dec 31 11:08:44.823: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.77146375s
Dec 31 11:08:46.847: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.795336421s
Dec 31 11:08:48.912: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.860075479s
STEP: Saw pod success
Dec 31 11:08:48.912: INFO: Pod "pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:08:48.926: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 11:08:49.063: INFO: Waiting for pod pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005 to disappear
Dec 31 11:08:49.079: INFO: Pod pod-secrets-e3d3444b-2bbd-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:08:49.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-w6b7d" for this suite.
Dec 31 11:08:55.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:08:55.324: INFO: namespace: e2e-tests-secrets-w6b7d, resource: bindings, ignored listing per whitelist
Dec 31 11:08:55.324: INFO: namespace e2e-tests-secrets-w6b7d deletion completed in 6.236552063s

• [SLOW TEST:19.661 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:08:55.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 31 11:08:55.497: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:09:13.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bf978" for this suite.
Dec 31 11:09:21.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:09:21.692: INFO: namespace: e2e-tests-init-container-bf978, resource: bindings, ignored listing per whitelist
Dec 31 11:09:21.707: INFO: namespace e2e-tests-init-container-bf978 deletion completed in 8.261463014s

• [SLOW TEST:26.382 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:09:21.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 31 11:09:21.949: INFO: Waiting up to 5m0s for pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005" in namespace "e2e-tests-var-expansion-28lwq" to be "success or failure"
Dec 31 11:09:22.011: INFO: Pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.608124ms
Dec 31 11:09:24.185: INFO: Pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235718564s
Dec 31 11:09:26.238: INFO: Pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288880734s
Dec 31 11:09:28.597: INFO: Pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648355861s
Dec 31 11:09:30.630: INFO: Pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.680882416s
STEP: Saw pod success
Dec 31 11:09:30.630: INFO: Pod "var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:09:30.651: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 11:09:31.110: INFO: Waiting for pod var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005 to disappear
Dec 31 11:09:31.131: INFO: Pod var-expansion-ff3dc5b1-2bbd-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:09:31.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-28lwq" for this suite.
Dec 31 11:09:37.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:09:37.365: INFO: namespace: e2e-tests-var-expansion-28lwq, resource: bindings, ignored listing per whitelist
Dec 31 11:09:37.473: INFO: namespace e2e-tests-var-expansion-28lwq deletion completed in 6.332771358s

• [SLOW TEST:15.766 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:09:37.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 31 11:09:37.689: INFO: Waiting up to 5m0s for pod "pod-08a07559-2bbe-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-52p5b" to be "success or failure"
Dec 31 11:09:37.693: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.837358ms
Dec 31 11:09:39.699: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009900946s
Dec 31 11:09:41.713: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023894293s
Dec 31 11:09:44.192: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503135764s
Dec 31 11:09:46.205: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515699621s
Dec 31 11:09:48.225: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.535295219s
STEP: Saw pod success
Dec 31 11:09:48.225: INFO: Pod "pod-08a07559-2bbe-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:09:48.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-08a07559-2bbe-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:09:48.277: INFO: Waiting for pod pod-08a07559-2bbe-11ea-a129-0242ac110005 to disappear
Dec 31 11:09:48.288: INFO: Pod pod-08a07559-2bbe-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:09:48.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-52p5b" for this suite.
Dec 31 11:09:55.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:09:55.861: INFO: namespace: e2e-tests-emptydir-52p5b, resource: bindings, ignored listing per whitelist
Dec 31 11:09:55.905: INFO: namespace e2e-tests-emptydir-52p5b deletion completed in 7.598938607s

• [SLOW TEST:18.433 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:09:55.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-13c2a1ae-2bbe-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:09:56.429: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-n858p" to be "success or failure"
Dec 31 11:09:56.636: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 207.207761ms
Dec 31 11:09:58.660: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230280294s
Dec 31 11:10:00.697: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26746035s
Dec 31 11:10:02.821: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392165478s
Dec 31 11:10:05.540: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.110524078s
Dec 31 11:10:07.580: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.151095206s
STEP: Saw pod success
Dec 31 11:10:07.580: INFO: Pod "pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:10:07.594: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 11:10:08.174: INFO: Waiting for pod pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005 to disappear
Dec 31 11:10:08.184: INFO: Pod pod-projected-configmaps-13c55d58-2bbe-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:10:08.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n858p" for this suite.
Dec 31 11:10:14.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:10:14.350: INFO: namespace: e2e-tests-projected-n858p, resource: bindings, ignored listing per whitelist
Dec 31 11:10:14.404: INFO: namespace e2e-tests-projected-n858p deletion completed in 6.206550071s

• [SLOW TEST:18.498 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:10:14.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:10:27.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-75r7x" for this suite.
Dec 31 11:10:52.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:10:52.288: INFO: namespace: e2e-tests-replication-controller-75r7x, resource: bindings, ignored listing per whitelist
Dec 31 11:10:52.375: INFO: namespace e2e-tests-replication-controller-75r7x deletion completed in 24.407876432s

• [SLOW TEST:37.971 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:10:52.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-356a40d1-2bbe-11ea-a129-0242ac110005
STEP: Creating secret with name s-test-opt-upd-356a4178-2bbe-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-356a40d1-2bbe-11ea-a129-0242ac110005
STEP: Updating secret s-test-opt-upd-356a4178-2bbe-11ea-a129-0242ac110005
STEP: Creating secret with name s-test-opt-create-356a41b2-2bbe-11ea-a129-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:12:15.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qfh6j" for this suite.
Dec 31 11:12:41.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:12:41.934: INFO: namespace: e2e-tests-secrets-qfh6j, resource: bindings, ignored listing per whitelist
Dec 31 11:12:42.036: INFO: namespace e2e-tests-secrets-qfh6j deletion completed in 26.303435114s

• [SLOW TEST:109.661 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:12:42.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:12:42.517: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 28.27461ms)
Dec 31 11:12:42.560: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 43.330845ms)
Dec 31 11:12:42.637: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 76.299665ms)
Dec 31 11:12:42.652: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.481479ms)
Dec 31 11:12:42.670: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.905697ms)
Dec 31 11:12:42.677: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.84956ms)
Dec 31 11:12:42.683: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.497928ms)
Dec 31 11:12:42.688: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.921465ms)
Dec 31 11:12:42.693: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.524031ms)
Dec 31 11:12:42.700: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.770855ms)
Dec 31 11:12:42.706: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.640451ms)
Dec 31 11:12:42.712: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.992914ms)
Dec 31 11:12:42.718: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.688417ms)
Dec 31 11:12:42.723: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.143059ms)
Dec 31 11:12:42.729: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.494655ms)
Dec 31 11:12:42.812: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 83.229828ms)
Dec 31 11:12:42.824: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.647219ms)
Dec 31 11:12:42.833: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.509267ms)
Dec 31 11:12:42.838: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.865017ms)
Dec 31 11:12:42.844: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.772392ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:12:42.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-jszk7" for this suite.
Dec 31 11:12:48.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:12:49.005: INFO: namespace: e2e-tests-proxy-jszk7, resource: bindings, ignored listing per whitelist
Dec 31 11:12:49.198: INFO: namespace e2e-tests-proxy-jszk7 deletion completed in 6.347680098s

• [SLOW TEST:7.162 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:12:49.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-7adc7a33-2bbe-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 11:12:49.565: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-d89l6" to be "success or failure"
Dec 31 11:12:49.700: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 135.397957ms
Dec 31 11:12:51.715: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150095151s
Dec 31 11:12:53.745: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179753036s
Dec 31 11:12:55.759: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193898883s
Dec 31 11:12:57.775: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210036922s
Dec 31 11:12:59.798: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232891885s
STEP: Saw pod success
Dec 31 11:12:59.798: INFO: Pod "pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:12:59.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 11:13:00.588: INFO: Waiting for pod pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005 to disappear
Dec 31 11:13:00.945: INFO: Pod pod-projected-secrets-7add3bbf-2bbe-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:13:00.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d89l6" for this suite.
Dec 31 11:13:06.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:13:07.086: INFO: namespace: e2e-tests-projected-d89l6, resource: bindings, ignored listing per whitelist
Dec 31 11:13:07.212: INFO: namespace e2e-tests-projected-d89l6 deletion completed in 6.254390973s

• [SLOW TEST:18.013 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:13:07.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:13:07.553: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.706523ms)
Dec 31 11:13:07.558: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.241465ms)
Dec 31 11:13:07.567: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.801613ms)
Dec 31 11:13:07.571: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.209616ms)
Dec 31 11:13:07.581: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.47906ms)
Dec 31 11:13:07.595: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.619602ms)
Dec 31 11:13:07.637: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 42.202603ms)
Dec 31 11:13:07.645: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.018794ms)
Dec 31 11:13:07.654: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.979722ms)
Dec 31 11:13:07.661: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.367423ms)
Dec 31 11:13:07.668: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.707026ms)
Dec 31 11:13:07.674: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.908481ms)
Dec 31 11:13:07.679: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.90382ms)
Dec 31 11:13:07.685: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.99635ms)
Dec 31 11:13:07.690: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.241995ms)
Dec 31 11:13:07.695: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.274905ms)
Dec 31 11:13:07.700: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.758642ms)
Dec 31 11:13:07.706: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.206675ms)
Dec 31 11:13:07.712: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.771489ms)
Dec 31 11:13:07.718: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.774144ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:13:07.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-nwvxx" for this suite.
Dec 31 11:13:13.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:13:14.040: INFO: namespace: e2e-tests-proxy-nwvxx, resource: bindings, ignored listing per whitelist
Dec 31 11:13:14.047: INFO: namespace e2e-tests-proxy-nwvxx deletion completed in 6.324526306s

• [SLOW TEST:6.835 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
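Editor's note: the 20 requests logged above all go through the API server's node proxy subresource to reach the kubelet's /logs endpoint on port 10250. A minimal sketch of how that request path is formed, using only the node name and port that appear in the log (the helper name `proxyLogsPath` is hypothetical, not from the test code):

```go
package main

import "fmt"

// proxyLogsPath builds the API-server path seen in the log above: the node
// proxy subresource with an explicit kubelet port, ending at /proxy/logs/.
func proxyLogsPath(nodeName string, port int) string {
	return fmt.Sprintf("/api/v1/nodes/%s:%d/proxy/logs/", nodeName, port)
}

func main() {
	// Values taken from the log entries above.
	fmt.Println(proxyLogsPath("hunter-server-hu5at5svl7ps", 10250))
}
```

The API server authenticates the client, then forwards the request to the named node's kubelet, which is why the test can read kubelet logs without talking to port 10250 directly.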
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:13:14.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-89ac2c4e-2bbe-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:13:14.238: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-4lzmr" to be "success or failure"
Dec 31 11:13:14.247: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749706ms
Dec 31 11:13:16.713: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47472391s
Dec 31 11:13:18.727: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488585017s
Dec 31 11:13:20.758: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51942341s
Dec 31 11:13:22.806: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5675528s
Dec 31 11:13:24.850: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.611882445s
Dec 31 11:13:26.871: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.632054948s
STEP: Saw pod success
Dec 31 11:13:26.871: INFO: Pod "pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:13:26.874: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 11:13:27.047: INFO: Waiting for pod pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005 to disappear
Dec 31 11:13:27.074: INFO: Pod pod-projected-configmaps-89acebc8-2bbe-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:13:27.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4lzmr" for this suite.
Dec 31 11:13:33.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:13:33.292: INFO: namespace: e2e-tests-projected-4lzmr, resource: bindings, ignored listing per whitelist
Dec 31 11:13:33.381: INFO: namespace e2e-tests-projected-4lzmr deletion completed in 6.293362019s

• [SLOW TEST:19.333 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:13:33.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ltmwt
Dec 31 11:13:43.800: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ltmwt
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 11:13:43.851: INFO: Initial restart count of pod liveness-http is 0
Dec 31 11:14:02.118: INFO: Restart count of pod e2e-tests-container-probe-ltmwt/liveness-http is now 1 (18.267543393s elapsed)
Dec 31 11:14:22.385: INFO: Restart count of pod e2e-tests-container-probe-ltmwt/liveness-http is now 2 (38.534291809s elapsed)
Dec 31 11:14:42.841: INFO: Restart count of pod e2e-tests-container-probe-ltmwt/liveness-http is now 3 (58.990386087s elapsed)
Dec 31 11:15:01.082: INFO: Restart count of pod e2e-tests-container-probe-ltmwt/liveness-http is now 4 (1m17.23109781s elapsed)
Dec 31 11:16:15.926: INFO: Restart count of pod e2e-tests-container-probe-ltmwt/liveness-http is now 5 (2m32.075696986s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:16:16.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ltmwt" for this suite.
Dec 31 11:16:22.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:16:22.531: INFO: namespace: e2e-tests-container-probe-ltmwt, resource: bindings, ignored listing per whitelist
Dec 31 11:16:22.663: INFO: namespace e2e-tests-container-probe-ltmwt deletion completed in 6.482683424s

• [SLOW TEST:169.282 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
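Editor's note: the probe test above records restartCount after each liveness failure (0, 1, 2, 3, 4, 5 in the log) and asserts the sequence never decreases. A sketch of that invariant check, under the assumption that the counts are sampled in order (the function name `isMonotonic` is illustrative, not the framework's):

```go
package main

import "fmt"

// isMonotonic reports whether each sampled restartCount is >= the previous
// sample, the property the "monotonically increasing restart count" test
// above verifies while the liveness probe keeps failing.
func isMonotonic(counts []int32) bool {
	for i := 1; i < len(counts); i++ {
		if counts[i] < counts[i-1] {
			return false
		}
	}
	return true
}

func main() {
	// The restart counts observed in the log above.
	observed := []int32{0, 1, 2, 3, 4, 5}
	fmt.Println(isMonotonic(observed))
}
```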
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:16:22.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:16:22.961: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 31 11:16:23.037: INFO: Number of nodes with available pods: 0
Dec 31 11:16:23.037: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:24.184: INFO: Number of nodes with available pods: 0
Dec 31 11:16:24.184: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:25.419: INFO: Number of nodes with available pods: 0
Dec 31 11:16:25.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:26.058: INFO: Number of nodes with available pods: 0
Dec 31 11:16:26.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:27.086: INFO: Number of nodes with available pods: 0
Dec 31 11:16:27.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:28.088: INFO: Number of nodes with available pods: 0
Dec 31 11:16:28.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:29.432: INFO: Number of nodes with available pods: 0
Dec 31 11:16:29.432: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:30.058: INFO: Number of nodes with available pods: 0
Dec 31 11:16:30.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:31.066: INFO: Number of nodes with available pods: 0
Dec 31 11:16:31.066: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:32.063: INFO: Number of nodes with available pods: 0
Dec 31 11:16:32.063: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:33.099: INFO: Number of nodes with available pods: 1
Dec 31 11:16:33.099: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 31 11:16:33.189: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:34.566: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:35.378: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:36.389: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:37.384: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:38.384: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:39.381: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:39.381: INFO: Pod daemon-set-shcqn is not available
Dec 31 11:16:40.390: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:40.390: INFO: Pod daemon-set-shcqn is not available
Dec 31 11:16:41.383: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:41.383: INFO: Pod daemon-set-shcqn is not available
Dec 31 11:16:42.392: INFO: Wrong image for pod: daemon-set-shcqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 11:16:42.392: INFO: Pod daemon-set-shcqn is not available
Dec 31 11:16:43.911: INFO: Pod daemon-set-fkc7s is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 31 11:16:43.944: INFO: Number of nodes with available pods: 0
Dec 31 11:16:43.944: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:44.980: INFO: Number of nodes with available pods: 0
Dec 31 11:16:44.980: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:45.993: INFO: Number of nodes with available pods: 0
Dec 31 11:16:45.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:46.996: INFO: Number of nodes with available pods: 0
Dec 31 11:16:46.996: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:48.476: INFO: Number of nodes with available pods: 0
Dec 31 11:16:48.476: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:48.991: INFO: Number of nodes with available pods: 0
Dec 31 11:16:48.991: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:50.047: INFO: Number of nodes with available pods: 0
Dec 31 11:16:50.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:50.987: INFO: Number of nodes with available pods: 0
Dec 31 11:16:50.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:16:51.971: INFO: Number of nodes with available pods: 1
Dec 31 11:16:51.971: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nlpbq, will wait for the garbage collector to delete the pods
Dec 31 11:16:52.073: INFO: Deleting DaemonSet.extensions daemon-set took: 24.77916ms
Dec 31 11:16:52.173: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.331498ms
Dec 31 11:17:02.697: INFO: Number of nodes with available pods: 0
Dec 31 11:17:02.697: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 11:17:02.703: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nlpbq/daemonsets","resourceVersion":"16674735"},"items":null}

Dec 31 11:17:02.707: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nlpbq/pods","resourceVersion":"16674735"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:17:02.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nlpbq" for this suite.
Dec 31 11:17:10.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:17:10.839: INFO: namespace: e2e-tests-daemonsets-nlpbq, resource: bindings, ignored listing per whitelist
Dec 31 11:17:11.030: INFO: namespace e2e-tests-daemonsets-nlpbq deletion completed in 8.279815943s

• [SLOW TEST:48.366 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:17:11.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-16f0dd6f-2bbf-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 11:17:11.235: INFO: Waiting up to 5m0s for pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-rr2tm" to be "success or failure"
Dec 31 11:17:11.260: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.454492ms
Dec 31 11:17:13.537: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302070453s
Dec 31 11:17:15.547: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312259447s
Dec 31 11:17:17.924: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689511547s
Dec 31 11:17:19.943: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.707946306s
Dec 31 11:17:21.966: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.73108513s
STEP: Saw pod success
Dec 31 11:17:21.966: INFO: Pod "pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:17:21.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 11:17:22.143: INFO: Waiting for pod pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:17:22.189: INFO: Pod pod-secrets-16f1e187-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:17:22.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rr2tm" for this suite.
Dec 31 11:17:28.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:17:28.427: INFO: namespace: e2e-tests-secrets-rr2tm, resource: bindings, ignored listing per whitelist
Dec 31 11:17:28.655: INFO: namespace e2e-tests-secrets-rr2tm deletion completed in 6.442874951s

• [SLOW TEST:17.625 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
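Editor's note: several tests above poll a pod for up to 5m0s until it meets the "success or failure" condition, i.e. the phase leaves Pending and becomes terminal (the log shows Pending repeating, then Succeeded). A sketch of the terminal-phase check that ends such a poll, with phase names copied from the log (the helper `terminal` is illustrative, not the framework's):

```go
package main

import "fmt"

// PodPhase mirrors the Phase values printed in the log above.
type PodPhase string

const (
	PodPending   PodPhase = "Pending"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

// terminal reports whether a phase satisfies the "success or failure"
// condition, at which point the wait loop stops polling.
func terminal(p PodPhase) bool {
	return p == PodSucceeded || p == PodFailed
}

func main() {
	// The phase progression observed while polling pod-secrets-... above.
	for _, p := range []PodPhase{PodPending, PodPending, PodSucceeded} {
		fmt.Println(p, terminal(p))
	}
}
```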
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:17:28.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:17:57.109: INFO: Container started at 2019-12-31 11:17:37 +0000 UTC, pod became ready at 2019-12-31 11:17:55 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:17:57.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m9ls5" for this suite.
Dec 31 11:18:21.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:18:21.290: INFO: namespace: e2e-tests-container-probe-m9ls5, resource: bindings, ignored listing per whitelist
Dec 31 11:18:21.368: INFO: namespace e2e-tests-container-probe-m9ls5 deletion completed in 24.244723294s

• [SLOW TEST:52.712 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:18:21.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 31 11:18:21.667: INFO: Waiting up to 5m0s for pod "pod-40efb817-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-gmsq8" to be "success or failure"
Dec 31 11:18:21.694: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.972528ms
Dec 31 11:18:24.220: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.552989873s
Dec 31 11:18:26.236: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568259708s
Dec 31 11:18:28.631: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.963683356s
Dec 31 11:18:30.698: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.030913606s
Dec 31 11:18:32.767: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.099699498s
STEP: Saw pod success
Dec 31 11:18:32.767: INFO: Pod "pod-40efb817-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:18:32.780: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-40efb817-2bbf-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:18:32.948: INFO: Waiting for pod pod-40efb817-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:18:32.968: INFO: Pod pod-40efb817-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:18:32.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gmsq8" for this suite.
Dec 31 11:18:39.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:18:39.151: INFO: namespace: e2e-tests-emptydir-gmsq8, resource: bindings, ignored listing per whitelist
Dec 31 11:18:39.233: INFO: namespace e2e-tests-emptydir-gmsq8 deletion completed in 6.251077746s

• [SLOW TEST:17.865 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
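The repeated `Phase="Pending" ... Elapsed:` lines above come from the framework polling the pod until it reaches the "success or failure" condition. A minimal Python sketch of that wait loop, with a stubbed phase lookup standing in for the Kubernetes API (the poll limit and the stub are assumptions; the real framework polls roughly every 2 seconds for up to 5 minutes):

```python
import itertools

def wait_for_pod_condition(get_phase, poll_limit=150):
    """Poll a pod's phase until it reaches a terminal state
    ("Succeeded" or "Failed"), mirroring the framework's
    'success or failure' wait. get_phase is a callable standing
    in for a GET on the pod resource."""
    for attempt in itertools.count():
        if attempt >= poll_limit:
            raise TimeoutError("pod never reached a terminal phase")
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        # the real framework sleeps ~2s between polls; omitted here

# Stubbed API: pod stays Pending for a few polls, then succeeds,
# matching the Pending/Pending/.../Succeeded progression in the log.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_pod_condition(lambda: next(phases)))  # Succeeded
```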
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:18:39.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:19:39.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-phj86" for this suite.
Dec 31 11:19:45.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:19:46.013: INFO: namespace: e2e-tests-container-runtime-phj86, resource: bindings, ignored listing per whitelist
Dec 31 11:19:46.083: INFO: namespace e2e-tests-container-runtime-phj86 deletion completed in 6.25059263s

• [SLOW TEST:66.849 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
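The three blackbox containers above appear to exercise the three Pod restart policies — the `rpa`/`rpof`/`rpn` suffixes reading as Always, OnFailure, and Never is an assumption based on the naming convention in the e2e runtime tests. A sketch of how restart policy and exit code determine whether the kubelet restarts a terminated container (and hence the expected `RestartCount`):

```python
def should_restart(restart_policy, exit_code):
    """Decide whether a terminated container is restarted, per the
    Kubernetes restart-policy rules:
      Always    -> restart regardless of exit code
      OnFailure -> restart only on non-zero exit
      Never     -> never restart."""
    if restart_policy == "Always":
        return True
    if restart_policy == "OnFailure":
        return exit_code != 0
    if restart_policy == "Never":
        return False
    raise ValueError(f"unknown restart policy: {restart_policy}")

# Mapping assumed from the container-name suffixes in the log:
for name, policy in [("terminate-cmd-rpa", "Always"),
                     ("terminate-cmd-rpof", "OnFailure"),
                     ("terminate-cmd-rpn", "Never")]:
    print(name, should_restart(policy, exit_code=1))
```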
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:19:46.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 11:19:46.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-rckg9'
Dec 31 11:19:49.286: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 11:19:49.286: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 31 11:19:51.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-rckg9'
Dec 31 11:19:52.264: INFO: stderr: ""
Dec 31 11:19:52.265: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:19:52.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rckg9" for this suite.
Dec 31 11:19:58.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:19:58.879: INFO: namespace: e2e-tests-kubectl-rckg9, resource: bindings, ignored listing per whitelist
Dec 31 11:19:58.882: INFO: namespace e2e-tests-kubectl-rckg9 deletion completed in 6.560745055s

• [SLOW TEST:12.799 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:19:58.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 31 11:19:59.086: INFO: Waiting up to 5m0s for pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-4hz7s" to be "success or failure"
Dec 31 11:19:59.114: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.059637ms
Dec 31 11:20:01.319: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233024509s
Dec 31 11:20:03.331: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244595041s
Dec 31 11:20:05.349: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262520595s
Dec 31 11:20:07.358: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271554794s
Dec 31 11:20:09.379: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292585432s
STEP: Saw pod success
Dec 31 11:20:09.379: INFO: Pod "downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:20:09.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 11:20:09.492: INFO: Waiting for pod downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:20:10.670: INFO: Pod downward-api-7aff55a6-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:20:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4hz7s" for this suite.
Dec 31 11:20:16.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:20:17.019: INFO: namespace: e2e-tests-downward-api-4hz7s, resource: bindings, ignored listing per whitelist
Dec 31 11:20:17.062: INFO: namespace e2e-tests-downward-api-4hz7s deletion completed in 6.176619808s

• [SLOW TEST:18.179 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
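The downward-api test above injects the container's own resource limits and requests into its environment via `resourceFieldRef`. A sketch of the mapping the `dapi-container` verifies; the env-var names and resource quantities here are hypothetical illustrations, not values from this run:

```python
def downward_api_env(resources):
    """Build the env vars a downward-API pod would expose via
    resourceFieldRef (limits.cpu, limits.memory, requests.cpu,
    requests.memory). Variable names are illustrative."""
    return {
        "CPU_LIMIT": resources["limits"]["cpu"],
        "MEMORY_LIMIT": resources["limits"]["memory"],
        "CPU_REQUEST": resources["requests"]["cpu"],
        "MEMORY_REQUEST": resources["requests"]["memory"],
    }

# Hypothetical container resources:
env = downward_api_env({
    "limits":   {"cpu": "1250m", "memory": "64Mi"},
    "requests": {"cpu": "250m",  "memory": "32Mi"},
})
print(env["CPU_LIMIT"])  # 1250m
```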
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:20:17.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:20:17.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-tzncd" to be "success or failure"
Dec 31 11:20:17.364: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.809433ms
Dec 31 11:20:19.568: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212291362s
Dec 31 11:20:21.574: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218507671s
Dec 31 11:20:23.956: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600652659s
Dec 31 11:20:25.983: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.627555716s
Dec 31 11:20:28.001: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.645693711s
STEP: Saw pod success
Dec 31 11:20:28.001: INFO: Pod "downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:20:28.017: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:20:28.143: INFO: Waiting for pod downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:20:28.159: INFO: Pod downwardapi-volume-85e4dbce-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:20:28.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tzncd" for this suite.
Dec 31 11:20:34.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:20:34.315: INFO: namespace: e2e-tests-projected-tzncd, resource: bindings, ignored listing per whitelist
Dec 31 11:20:34.359: INFO: namespace e2e-tests-projected-tzncd deletion completed in 6.193046161s

• [SLOW TEST:17.297 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:20:34.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 31 11:20:34.737: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9q4kl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9q4kl/configmaps/e2e-watch-test-resource-version,UID:902cf316-2bbf-11ea-a994-fa163e34d433,ResourceVersion:16675227,Generation:0,CreationTimestamp:2019-12-31 11:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 11:20:34.738: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9q4kl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9q4kl/configmaps/e2e-watch-test-resource-version,UID:902cf316-2bbf-11ea-a994-fa163e34d433,ResourceVersion:16675228,Generation:0,CreationTimestamp:2019-12-31 11:20:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:20:34.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9q4kl" for this suite.
Dec 31 11:20:40.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:20:40.959: INFO: namespace: e2e-tests-watch-9q4kl, resource: bindings, ignored listing per whitelist
Dec 31 11:20:40.973: INFO: namespace e2e-tests-watch-9q4kl deletion completed in 6.223605886s

• [SLOW TEST:6.613 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
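Starting a watch from a specific resource version means the server replays only events newer than that version — which is why the log shows the second MODIFIED (ResourceVersion 16675227, `mutation: 2`) and the DELETED (16675228) arriving, but not the first update the watch was anchored to. A sketch of that filtering; the earlier resource versions in the history are assumed for illustration:

```python
def events_since(events, resource_version):
    """Replay only the events newer than the given resourceVersion,
    as a watch started from that version would. Each event is a
    (type, resourceVersion) pair."""
    return [e for e in events if e[1] > resource_version]

# ConfigMap lifecycle from the test: create, two modifies, delete.
# The last two resource versions match the log; earlier ones are assumed.
history = [
    ("ADDED",    16675225),
    ("MODIFIED", 16675226),  # first update: the watch starts here
    ("MODIFIED", 16675227),  # mutation: 2, observed above
    ("DELETED",  16675228),  # observed above
]
print(events_since(history, 16675226))
# [('MODIFIED', 16675227), ('DELETED', 16675228)]
```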
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:20:40.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:20:41.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-lz5lp" to be "success or failure"
Dec 31 11:20:41.326: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 123.321219ms
Dec 31 11:20:43.463: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260791928s
Dec 31 11:20:45.486: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28408429s
Dec 31 11:20:47.741: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.538929894s
Dec 31 11:20:49.901: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.699166521s
Dec 31 11:20:51.934: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.731222088s
STEP: Saw pod success
Dec 31 11:20:51.934: INFO: Pod "downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:20:51.944: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:20:52.789: INFO: Waiting for pod downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:20:53.070: INFO: Pod downwardapi-volume-9417dc21-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:20:53.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lz5lp" for this suite.
Dec 31 11:20:59.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:20:59.336: INFO: namespace: e2e-tests-projected-lz5lp, resource: bindings, ignored listing per whitelist
Dec 31 11:20:59.387: INFO: namespace e2e-tests-projected-lz5lp deletion completed in 6.30065655s

• [SLOW TEST:18.414 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:20:59.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:20:59.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-q5dxk" to be "success or failure"
Dec 31 11:20:59.815: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.217373ms
Dec 31 11:21:02.153: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363987902s
Dec 31 11:21:04.182: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392626962s
Dec 31 11:21:06.598: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.808706678s
Dec 31 11:21:08.614: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824317621s
Dec 31 11:21:11.701: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.912272218s
STEP: Saw pod success
Dec 31 11:21:11.702: INFO: Pod "downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:21:11.715: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:21:12.295: INFO: Waiting for pod downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:21:12.308: INFO: Pod downwardapi-volume-9f2e9de2-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:21:12.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q5dxk" for this suite.
Dec 31 11:21:18.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:21:18.530: INFO: namespace: e2e-tests-downward-api-q5dxk, resource: bindings, ignored listing per whitelist
Dec 31 11:21:18.608: INFO: namespace e2e-tests-downward-api-q5dxk deletion completed in 6.277314881s

• [SLOW TEST:19.221 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:21:18.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 31 11:21:18.813: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:21:19.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g6942" for this suite.
Dec 31 11:21:25.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:21:25.118: INFO: namespace: e2e-tests-kubectl-g6942, resource: bindings, ignored listing per whitelist
Dec 31 11:21:25.297: INFO: namespace e2e-tests-kubectl-g6942 deletion completed in 6.258538071s

• [SLOW TEST:6.689 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
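Passing `-p 0` to `kubectl proxy` asks the operating system to assign a free ephemeral port instead of the default 8001; the proxy then reports the port it was actually given. The same kernel behavior can be demonstrated directly by binding a socket to port 0:

```python
import socket

# Binding to port 0 lets the kernel pick a free ephemeral port,
# which is the mechanism `kubectl proxy -p 0` relies on.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]  # the port the kernel actually assigned
print(port)
s.close()
```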
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:21:25.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:21:25.477: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-frdcs" to be "success or failure"
Dec 31 11:21:25.491: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.991293ms
Dec 31 11:21:27.506: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029470976s
Dec 31 11:21:29.524: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046842401s
Dec 31 11:21:31.762: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284781786s
Dec 31 11:21:33.791: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314290903s
Dec 31 11:21:35.828: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.3513132s
Dec 31 11:21:38.020: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.543522516s
STEP: Saw pod success
Dec 31 11:21:38.021: INFO: Pod "downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:21:38.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:21:38.535: INFO: Waiting for pod downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:21:38.549: INFO: Pod downwardapi-volume-ae7e9b96-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:21:38.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-frdcs" for this suite.
Dec 31 11:21:44.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:21:44.704: INFO: namespace: e2e-tests-downward-api-frdcs, resource: bindings, ignored listing per whitelist
Dec 31 11:21:44.757: INFO: namespace e2e-tests-downward-api-frdcs deletion completed in 6.200921699s

• [SLOW TEST:19.460 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:21:44.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:21:45.000: INFO: Creating ReplicaSet my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005
Dec 31 11:21:45.130: INFO: Pod name my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005: Found 0 pods out of 1
Dec 31 11:21:50.165: INFO: Pod name my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005: Found 1 pods out of 1
Dec 31 11:21:50.165: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005" is running
Dec 31 11:21:58.309: INFO: Pod "my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005-9tz9f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 11:21:45 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 11:21:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 11:21:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 11:21:45 +0000 UTC Reason: Message:}])
Dec 31 11:21:58.310: INFO: Trying to dial the pod
Dec 31 11:22:03.356: INFO: Controller my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005: Got expected result from replica 1 [my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005-9tz9f]: "my-hostname-basic-ba25be54-2bbf-11ea-a129-0242ac110005-9tz9f", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:22:03.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-69qp6" for this suite.
Dec 31 11:22:11.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:22:11.540: INFO: namespace: e2e-tests-replicaset-69qp6, resource: bindings, ignored listing per whitelist
Dec 31 11:22:11.715: INFO: namespace e2e-tests-replicaset-69qp6 deletion completed in 8.348365874s

• [SLOW TEST:26.957 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
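The ReplicaSet exercised above creates one replica of a pod serving its own hostname, then dials each replica and checks the response. A hedged sketch of an equivalent manifest; the name, image tag, and port are illustrative (the suite generates unique UUID-suffixed names):

```yaml
# Hypothetical manifest approximating what the test creates:
# one replica of a pod that serves its own hostname over HTTP.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # the suite appends a generated UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # public test image (assumed tag)
        ports:
        - containerPort: 9376
```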
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:22:11.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-v89s
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 11:22:14.525: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-v89s" in namespace "e2e-tests-subpath-vvzcv" to be "success or failure"
Dec 31 11:22:14.704: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 178.769686ms
Dec 31 11:22:17.339: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.813574837s
Dec 31 11:22:19.421: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.895364686s
Dec 31 11:22:21.712: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 7.186743424s
Dec 31 11:22:23.736: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 9.210171323s
Dec 31 11:22:25.758: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 11.232117999s
Dec 31 11:22:28.011: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 13.485399624s
Dec 31 11:22:30.019: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 15.493846024s
Dec 31 11:22:32.044: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Pending", Reason="", readiness=false. Elapsed: 17.51875669s
Dec 31 11:22:34.060: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 19.534337297s
Dec 31 11:22:36.075: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 21.549513554s
Dec 31 11:22:38.092: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 23.566418441s
Dec 31 11:22:40.118: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 25.592049017s
Dec 31 11:22:42.135: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 27.609729858s
Dec 31 11:22:44.145: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 29.619661884s
Dec 31 11:22:46.162: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 31.636218419s
Dec 31 11:22:48.202: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 33.675948722s
Dec 31 11:22:50.246: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Running", Reason="", readiness=false. Elapsed: 35.7207181s
Dec 31 11:22:52.262: INFO: Pod "pod-subpath-test-configmap-v89s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.736542649s
STEP: Saw pod success
Dec 31 11:22:52.262: INFO: Pod "pod-subpath-test-configmap-v89s" satisfied condition "success or failure"
Dec 31 11:22:52.268: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-v89s container test-container-subpath-configmap-v89s: 
STEP: delete the pod
Dec 31 11:22:53.695: INFO: Waiting for pod pod-subpath-test-configmap-v89s to disappear
Dec 31 11:22:53.740: INFO: Pod pod-subpath-test-configmap-v89s no longer exists
STEP: Deleting pod pod-subpath-test-configmap-v89s
Dec 31 11:22:53.740: INFO: Deleting pod "pod-subpath-test-configmap-v89s" in namespace "e2e-tests-subpath-vvzcv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:22:53.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vvzcv" for this suite.
Dec 31 11:23:00.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:23:00.305: INFO: namespace: e2e-tests-subpath-vvzcv, resource: bindings, ignored listing per whitelist
Dec 31 11:23:00.367: INFO: namespace e2e-tests-subpath-vvzcv deletion completed in 6.362308025s

• [SLOW TEST:48.651 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
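The pod under test mounts a single ConfigMap key via `subPath` over a file that already exists in the container image, then verifies the file content was replaced atomically. A minimal sketch, with hypothetical names and paths:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap     # suite-generated suffix omitted
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox
    command: ["cat", "/etc/resolv.conf"]  # illustrative: reads the file shadowed by the subPath mount
    volumeMounts:
    - name: config
      mountPath: /etc/resolv.conf      # an existing file in the container image
      subPath: data                    # mounts only this key, not the whole volume
  volumes:
  - name: config
    configMap:
      name: my-configmap               # hypothetical ConfigMap with a "data" key
```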
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:23:00.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:23:00.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-t7fq2" to be "success or failure"
Dec 31 11:23:00.696: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.022005ms
Dec 31 11:23:02.720: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04358038s
Dec 31 11:23:04.742: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064935499s
Dec 31 11:23:07.174: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497270579s
Dec 31 11:23:09.207: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529755079s
Dec 31 11:23:11.228: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.551584487s
STEP: Saw pod success
Dec 31 11:23:11.228: INFO: Pod "downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:23:11.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:23:11.431: INFO: Waiting for pod downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005 to disappear
Dec 31 11:23:11.573: INFO: Pod downwardapi-volume-e7339567-2bbf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:23:11.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t7fq2" for this suite.
Dec 31 11:23:17.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:23:17.663: INFO: namespace: e2e-tests-downward-api-t7fq2, resource: bindings, ignored listing per whitelist
Dec 31 11:23:17.805: INFO: namespace e2e-tests-downward-api-t7fq2 deletion completed in 6.216838199s

• [SLOW TEST:17.438 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
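The downwardAPI volume plugin exercised here exposes a container's CPU limit as a file through a `resourceFieldRef`, which the test container then reads back. A sketch under assumed names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                    # the value read back through the volume (illustrative)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: "1m"                # report the limit in millicores
```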
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:23:17.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-jh9fd
Dec 31 11:23:28.155: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-jh9fd
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 11:23:28.160: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:27:29.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jh9fd" for this suite.
Dec 31 11:27:38.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:27:38.264: INFO: namespace: e2e-tests-container-probe-jh9fd, resource: bindings, ignored listing per whitelist
Dec 31 11:27:38.337: INFO: namespace e2e-tests-container-probe-jh9fd deletion completed in 8.340904136s

• [SLOW TEST:260.532 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
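The liveness-exec pattern above: the container creates `/tmp/health` and keeps running, so the exec probe (`cat /tmp/health`) succeeds for the pod's whole lifetime and `restartCount` stays at 0 over the four-minute observation window. A sketch (probe timings are illustrative, not the suite's exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # create the health file and keep the container alive; the probe
    # keeps succeeding, so the kubelet never restarts the container
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
```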
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:27:38.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 31 11:27:38.740: INFO: Waiting up to 5m0s for pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-w9x5d" to be "success or failure"
Dec 31 11:27:38.752: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.285834ms
Dec 31 11:27:40.788: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047589693s
Dec 31 11:27:42.803: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062308425s
Dec 31 11:27:45.245: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504618494s
Dec 31 11:27:47.277: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536404003s
Dec 31 11:27:49.304: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.56316577s
STEP: Saw pod success
Dec 31 11:27:49.304: INFO: Pod "pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:27:49.343: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:27:49.486: INFO: Waiting for pod pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005 to disappear
Dec 31 11:27:49.565: INFO: Pod pod-8cf5a5f3-2bc0-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:27:49.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w9x5d" for this suite.
Dec 31 11:27:55.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:27:55.874: INFO: namespace: e2e-tests-emptydir-w9x5d, resource: bindings, ignored listing per whitelist
Dec 31 11:27:56.003: INFO: namespace e2e-tests-emptydir-w9x5d deletion completed in 6.30054295s

• [SLOW TEST:17.666 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
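The EmptyDir matrix tests all follow one pattern: mount an emptyDir volume, write a file with a given mode as a given user, and verify the content, permissions, and medium. A hedged sketch of the (non-root,0666,default) case with an assumed UID and busybox in place of the suite's mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test             # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # "non-root" variant of the matrix (assumed UID)
  containers:
  - name: test-container
    image: busybox
    # write a file with mode 0666 into the volume, then print its mode
    command: ["/bin/sh", "-c",
      "echo content > /vol/file && chmod 0666 /vol/file && stat -c '%a' /vol/file"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  volumes:
  - name: vol
    emptyDir: {}                       # "default" medium (node disk, not Memory)
```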
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:27:56.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 31 11:27:56.379: INFO: Waiting up to 5m0s for pod "pod-977223a6-2bc0-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-cqldf" to be "success or failure"
Dec 31 11:27:56.423: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.553168ms
Dec 31 11:27:58.585: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206215651s
Dec 31 11:28:00.650: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270993718s
Dec 31 11:28:02.676: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.297201885s
Dec 31 11:28:04.710: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331349315s
Dec 31 11:28:06.733: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.354551879s
STEP: Saw pod success
Dec 31 11:28:06.733: INFO: Pod "pod-977223a6-2bc0-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:28:06.742: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-977223a6-2bc0-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:28:06.939: INFO: Waiting for pod pod-977223a6-2bc0-11ea-a129-0242ac110005 to disappear
Dec 31 11:28:06.959: INFO: Pod pod-977223a6-2bc0-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:28:06.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cqldf" for this suite.
Dec 31 11:28:13.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:28:13.260: INFO: namespace: e2e-tests-emptydir-cqldf, resource: bindings, ignored listing per whitelist
Dec 31 11:28:13.276: INFO: namespace e2e-tests-emptydir-cqldf deletion completed in 6.302153476s

• [SLOW TEST:17.273 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:28:13.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-qcr2
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 11:28:13.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qcr2" in namespace "e2e-tests-subpath-z2bz8" to be "success or failure"
Dec 31 11:28:13.812: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.831606ms
Dec 31 11:28:16.118: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320830996s
Dec 31 11:28:18.133: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335871157s
Dec 31 11:28:20.148: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350540793s
Dec 31 11:28:22.168: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.370223994s
Dec 31 11:28:24.191: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.393656861s
Dec 31 11:28:26.199: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.401629829s
Dec 31 11:28:28.212: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.414177555s
Dec 31 11:28:30.221: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 16.423161805s
Dec 31 11:28:32.238: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 18.440248823s
Dec 31 11:28:34.253: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 20.45577474s
Dec 31 11:28:36.279: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 22.481425449s
Dec 31 11:28:38.295: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 24.49711382s
Dec 31 11:28:40.330: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 26.532341609s
Dec 31 11:28:42.341: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 28.543185587s
Dec 31 11:28:44.353: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 30.555489718s
Dec 31 11:28:46.367: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 32.569831332s
Dec 31 11:28:48.406: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Running", Reason="", readiness=false. Elapsed: 34.608470105s
Dec 31 11:28:51.105: INFO: Pod "pod-subpath-test-projected-qcr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.307751282s
STEP: Saw pod success
Dec 31 11:28:51.105: INFO: Pod "pod-subpath-test-projected-qcr2" satisfied condition "success or failure"
Dec 31 11:28:51.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-qcr2 container test-container-subpath-projected-qcr2: 
STEP: delete the pod
Dec 31 11:28:51.374: INFO: Waiting for pod pod-subpath-test-projected-qcr2 to disappear
Dec 31 11:28:51.435: INFO: Pod pod-subpath-test-projected-qcr2 no longer exists
STEP: Deleting pod pod-subpath-test-projected-qcr2
Dec 31 11:28:51.435: INFO: Deleting pod "pod-subpath-test-projected-qcr2" in namespace "e2e-tests-subpath-z2bz8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:28:51.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-z2bz8" for this suite.
Dec 31 11:28:59.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:28:59.607: INFO: namespace: e2e-tests-subpath-z2bz8, resource: bindings, ignored listing per whitelist
Dec 31 11:28:59.800: INFO: namespace e2e-tests-subpath-z2bz8 deletion completed in 8.354166409s

• [SLOW TEST:46.524 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
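The projected variant is the same subPath exercise against a projected volume, which merges one or more sources (ConfigMaps, Secrets, downward API) into a single directory. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected     # suite-generated suffix omitted
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-projected
    image: busybox
    command: ["cat", "/mnt/projected-key"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/projected-key
      subPath: projected-key           # mount a single projected file, not the directory
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-configmap           # hypothetical ConfigMap with a "data" key
          items:
          - key: data
            path: projected-key
```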
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:28:59.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 31 11:29:00.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xm9sm'
Dec 31 11:29:00.528: INFO: stderr: ""
Dec 31 11:29:00.528: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 31 11:29:01.541: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:01.541: INFO: Found 0 / 1
Dec 31 11:29:02.736: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:02.736: INFO: Found 0 / 1
Dec 31 11:29:03.577: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:03.577: INFO: Found 0 / 1
Dec 31 11:29:04.557: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:04.557: INFO: Found 0 / 1
Dec 31 11:29:05.604: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:05.604: INFO: Found 0 / 1
Dec 31 11:29:06.858: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:06.858: INFO: Found 0 / 1
Dec 31 11:29:07.602: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:07.602: INFO: Found 0 / 1
Dec 31 11:29:08.558: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:08.558: INFO: Found 0 / 1
Dec 31 11:29:09.561: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:09.561: INFO: Found 0 / 1
Dec 31 11:29:10.585: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:10.586: INFO: Found 0 / 1
Dec 31 11:29:11.546: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:11.546: INFO: Found 1 / 1
Dec 31 11:29:11.546: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 31 11:29:11.554: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:29:11.554: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Dec 31 11:29:11.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5qq5q redis-master --namespace=e2e-tests-kubectl-xm9sm'
Dec 31 11:29:11.805: INFO: stderr: ""
Dec 31 11:29:11.805: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Dec 11:29:09.406 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Dec 11:29:09.406 # Server started, Redis version 3.2.12\n1:M 31 Dec 11:29:09.407 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Dec 11:29:09.407 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 31 11:29:11.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5qq5q redis-master --namespace=e2e-tests-kubectl-xm9sm --tail=1'
Dec 31 11:29:12.003: INFO: stderr: ""
Dec 31 11:29:12.004: INFO: stdout: "1:M 31 Dec 11:29:09.407 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 31 11:29:12.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5qq5q redis-master --namespace=e2e-tests-kubectl-xm9sm --limit-bytes=1'
Dec 31 11:29:12.332: INFO: stderr: ""
Dec 31 11:29:12.332: INFO: stdout: " "
STEP: exposing timestamps
Dec 31 11:29:12.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5qq5q redis-master --namespace=e2e-tests-kubectl-xm9sm --tail=1 --timestamps'
Dec 31 11:29:12.698: INFO: stderr: ""
Dec 31 11:29:12.698: INFO: stdout: "2019-12-31T11:29:09.409665597Z 1:M 31 Dec 11:29:09.407 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 31 11:29:15.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5qq5q redis-master --namespace=e2e-tests-kubectl-xm9sm --since=1s'
Dec 31 11:29:15.414: INFO: stderr: ""
Dec 31 11:29:15.415: INFO: stdout: ""
Dec 31 11:29:15.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5qq5q redis-master --namespace=e2e-tests-kubectl-xm9sm --since=24h'
Dec 31 11:29:15.650: INFO: stderr: ""
Dec 31 11:29:15.650: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Dec 11:29:09.406 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Dec 11:29:09.406 # Server started, Redis version 3.2.12\n1:M 31 Dec 11:29:09.407 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Dec 11:29:09.407 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 31 11:29:15.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xm9sm'
Dec 31 11:29:15.801: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:29:15.801: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 31 11:29:15.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-xm9sm'
Dec 31 11:29:16.252: INFO: stderr: "No resources found.\n"
Dec 31 11:29:16.252: INFO: stdout: ""
Dec 31 11:29:16.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-xm9sm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 31 11:29:16.401: INFO: stderr: ""
Dec 31 11:29:16.401: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:29:16.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xm9sm" for this suite.
Dec 31 11:29:40.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:29:40.704: INFO: namespace: e2e-tests-kubectl-xm9sm, resource: bindings, ignored listing per whitelist
Dec 31 11:29:40.733: INFO: namespace e2e-tests-kubectl-xm9sm deletion completed in 24.315984929s

• [SLOW TEST:40.932 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:29:40.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d5c68340-2bc0-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:29:40.918: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-tg5wd" to be "success or failure"
Dec 31 11:29:40.930: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.728118ms
Dec 31 11:29:42.946: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028402315s
Dec 31 11:29:44.961: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043455046s
Dec 31 11:29:47.274: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356128697s
Dec 31 11:29:49.285: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.367344963s
Dec 31 11:29:51.301: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.383147947s
STEP: Saw pod success
Dec 31 11:29:51.301: INFO: Pod "pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:29:51.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 11:29:51.901: INFO: Waiting for pod pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005 to disappear
Dec 31 11:29:52.012: INFO: Pod pod-projected-configmaps-d5c74aa9-2bc0-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:29:52.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tg5wd" for this suite.
Dec 31 11:29:58.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:29:58.294: INFO: namespace: e2e-tests-projected-tg5wd, resource: bindings, ignored listing per whitelist
Dec 31 11:29:58.324: INFO: namespace e2e-tests-projected-tg5wd deletion completed in 6.294819077s

• [SLOW TEST:17.591 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:29:58.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 31 11:30:08.717: INFO: Pod pod-hostip-e0580cc8-2bc0-11ea-a129-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:30:08.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wlh6w" for this suite.
Dec 31 11:30:32.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:30:32.984: INFO: namespace: e2e-tests-pods-wlh6w, resource: bindings, ignored listing per whitelist
Dec 31 11:30:33.022: INFO: namespace e2e-tests-pods-wlh6w deletion completed in 24.279837581s

• [SLOW TEST:34.698 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:30:33.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 11:30:33.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bv24j'
Dec 31 11:30:35.233: INFO: stderr: ""
Dec 31 11:30:35.233: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 31 11:30:45.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bv24j -o json'
Dec 31 11:30:45.483: INFO: stderr: ""
Dec 31 11:30:45.483: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-31T11:30:35Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-bv24j\",\n        \"resourceVersion\": \"16676323\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-bv24j/pods/e2e-test-nginx-pod\",\n        \"uid\": \"f62c0a74-2bc0-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-kn4r6\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-kn4r6\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-kn4r6\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T11:30:35Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T11:30:44Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T11:30:44Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T11:30:35Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://b6e3e86e6cdfe6a43879109019fca275c3951fb09ffb805c086c107df419c697\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2019-12-31T11:30:43Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-31T11:30:35Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 31 11:30:45.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-bv24j'
Dec 31 11:30:46.050: INFO: stderr: ""
Dec 31 11:30:46.050: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 31 11:30:46.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bv24j'
Dec 31 11:30:55.282: INFO: stderr: ""
Dec 31 11:30:55.282: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:30:55.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bv24j" for this suite.
Dec 31 11:31:01.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:31:01.501: INFO: namespace: e2e-tests-kubectl-bv24j, resource: bindings, ignored listing per whitelist
Dec 31 11:31:01.529: INFO: namespace e2e-tests-kubectl-bv24j deletion completed in 6.238488492s

• [SLOW TEST:28.507 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:31:01.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 31 11:31:01.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-4h7vw run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 31 11:31:12.966: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 31 11:31:12.966: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:31:15.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4h7vw" for this suite.
Dec 31 11:31:22.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:31:22.570: INFO: namespace: e2e-tests-kubectl-4h7vw, resource: bindings, ignored listing per whitelist
Dec 31 11:31:22.635: INFO: namespace e2e-tests-kubectl-4h7vw deletion completed in 6.683393082s

• [SLOW TEST:21.106 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:31:22.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 31 11:31:22.847: INFO: Waiting up to 5m0s for pod "pod-12903138-2bc1-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-wp4j6" to be "success or failure"
Dec 31 11:31:22.861: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.119913ms
Dec 31 11:31:25.784: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.936559117s
Dec 31 11:31:27.792: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.944933694s
Dec 31 11:31:30.272: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.424506562s
Dec 31 11:31:32.286: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.438977731s
Dec 31 11:31:34.297: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.449240036s
STEP: Saw pod success
Dec 31 11:31:34.297: INFO: Pod "pod-12903138-2bc1-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:31:34.302: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-12903138-2bc1-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:31:34.503: INFO: Waiting for pod pod-12903138-2bc1-11ea-a129-0242ac110005 to disappear
Dec 31 11:31:34.527: INFO: Pod pod-12903138-2bc1-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:31:34.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wp4j6" for this suite.
Dec 31 11:31:41.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:31:41.492: INFO: namespace: e2e-tests-emptydir-wp4j6, resource: bindings, ignored listing per whitelist
Dec 31 11:31:41.570: INFO: namespace e2e-tests-emptydir-wp4j6 deletion completed in 7.03058036s

• [SLOW TEST:18.934 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:31:41.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:31:41.879: INFO: Creating deployment "nginx-deployment"
Dec 31 11:31:41.911: INFO: Waiting for observed generation 1
Dec 31 11:31:44.612: INFO: Waiting for all required pods to come up
Dec 31 11:31:45.719: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 31 11:32:27.844: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 31 11:32:27.861: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 31 11:32:27.877: INFO: Updating deployment nginx-deployment
Dec 31 11:32:27.877: INFO: Waiting for observed generation 2
Dec 31 11:32:30.150: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 31 11:32:30.165: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 31 11:32:31.792: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 31 11:32:32.481: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 31 11:32:32.481: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 31 11:32:32.511: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 31 11:32:33.347: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 31 11:32:33.348: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 31 11:32:33.368: INFO: Updating deployment nginx-deployment
Dec 31 11:32:33.368: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 31 11:32:34.360: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 31 11:32:40.398: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 31 11:32:42.362: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjmkt/deployments/nginx-deployment,UID:1deb5a38-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676762,Generation:3,CreationTimestamp:2019-12-31 11:31:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-31 11:32:34 +0000 UTC 2019-12-31 11:32:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-31 11:32:40 +0000 UTC 2019-12-31 11:31:42 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 31 11:32:44.051: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjmkt/replicasets/nginx-deployment-5c98f8fb5,UID:395b0036-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676756,Generation:3,CreationTimestamp:2019-12-31 11:32:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1deb5a38-2bc1-11ea-a994-fa163e34d433 0xc001f3d677 0xc001f3d678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 11:32:44.051: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 31 11:32:44.052: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjmkt/replicasets/nginx-deployment-85ddf47c5d,UID:1e016436-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676735,Generation:3,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1deb5a38-2bc1-11ea-a994-fa163e34d433 0xc001f3d787 0xc001f3d788}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 31 11:32:44.613: INFO: Pod "nginx-deployment-5c98f8fb5-2bcr2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2bcr2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-2bcr2,UID:39709be9-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676679,Generation:0,CreationTimestamp:2019-12-31 11:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd180 0xc001ccd181}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd1f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.613: INFO: Pod "nginx-deployment-5c98f8fb5-57lbk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-57lbk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-57lbk,UID:3e99135b-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676744,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd2d7 0xc001ccd2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd340} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.613: INFO: Pod "nginx-deployment-5c98f8fb5-5zsmj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5zsmj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-5zsmj,UID:3ec4fd77-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676754,Generation:0,CreationTimestamp:2019-12-31 11:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd3d7 0xc001ccd3d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd440} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.614: INFO: Pod "nginx-deployment-5c98f8fb5-9ggrv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9ggrv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-9ggrv,UID:3e73b0b1-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676736,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd4d7 0xc001ccd4d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd540} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.614: INFO: Pod "nginx-deployment-5c98f8fb5-9tjzt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9tjzt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-9tjzt,UID:3e9906b9-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676748,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd5d7 0xc001ccd5d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd640} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.614: INFO: Pod "nginx-deployment-5c98f8fb5-bmr6g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bmr6g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-bmr6g,UID:3e73e246-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676734,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd6d7 0xc001ccd6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd740} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.615: INFO: Pod "nginx-deployment-5c98f8fb5-d7k5k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d7k5k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-d7k5k,UID:396963ec-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676655,Generation:0,CreationTimestamp:2019-12-31 11:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd7d7 0xc001ccd7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd840} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.615: INFO: Pod "nginx-deployment-5c98f8fb5-dqwx7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dqwx7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-dqwx7,UID:3970b1e2-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676676,Generation:0,CreationTimestamp:2019-12-31 11:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccd927 0xc001ccd928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd990} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.615: INFO: Pod "nginx-deployment-5c98f8fb5-lbrn5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lbrn5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-lbrn5,UID:39c40482-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676721,Generation:0,CreationTimestamp:2019-12-31 11:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccda77 0xc001ccda78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdae0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.615: INFO: Pod "nginx-deployment-5c98f8fb5-rxf67" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rxf67,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-rxf67,UID:3e603910-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676724,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccdbc7 0xc001ccdbc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdc30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.616: INFO: Pod "nginx-deployment-5c98f8fb5-v5d9t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v5d9t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-v5d9t,UID:3e999ee4-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676747,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccdcc7 0xc001ccdcc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdd30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.616: INFO: Pod "nginx-deployment-5c98f8fb5-w5g7v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w5g7v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-w5g7v,UID:39c038c4-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676682,Generation:0,CreationTimestamp:2019-12-31 11:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccde57 0xc001ccde58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdec0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.616: INFO: Pod "nginx-deployment-5c98f8fb5-z9sl6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z9sl6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-5c98f8fb5-z9sl6,UID:3e99880a-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676746,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 395b0036-2bc1-11ea-a994-fa163e34d433 0xc001ccdfa7 0xc001ccdfa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a40010} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001a40030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.616: INFO: Pod "nginx-deployment-85ddf47c5d-47t5c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-47t5c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-47t5c,UID:3e094bec-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676708,Generation:0,CreationTimestamp:2019-12-31 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a40247 0xc001a40248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a402b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a402d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.616: INFO: Pod "nginx-deployment-85ddf47c5d-4bdk6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4bdk6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-4bdk6,UID:1e3617ab-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676623,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a40417 0xc001a40418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a40480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a404a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-31 11:31:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0cb050be3106d15f698a66fa3be316b9885aaf252f48c7d61a00084bfd8f8718}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.617: INFO: Pod "nginx-deployment-85ddf47c5d-5m5fm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5m5fm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-5m5fm,UID:3d6a4d8d-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676769,Generation:0,CreationTimestamp:2019-12-31 11:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a40927 0xc001a40928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a40990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a409b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.617: INFO: Pod "nginx-deployment-85ddf47c5d-8sxcw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8sxcw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-8sxcw,UID:1e0c9293-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676598,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a40a67 0xc001a40a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a40ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a40b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-31 11:31:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3691985432b22141d4f0cad76575b89a4c38bc105d8aa109d208a8fd2b700e36}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.617: INFO: Pod "nginx-deployment-85ddf47c5d-9kfzp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9kfzp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-9kfzp,UID:1e366f31-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676604,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a40cf7 0xc001a40cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a40d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a40d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-31 11:31:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e0bb2bf6abd239c8902d9115346417d31863cf4fbc02001c342fc2b97e45daa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.617: INFO: Pod "nginx-deployment-85ddf47c5d-b5v24" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5v24,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-b5v24,UID:3e375313-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676727,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a40f87 0xc001a40f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a40ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-bh7ch" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bh7ch,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-bh7ch,UID:3d695223-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676763,Generation:0,CreationTimestamp:2019-12-31 11:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41087 0xc001a41088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-bwdhc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bwdhc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-bwdhc,UID:1e22781d-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676627,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41237 0xc001a41238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a412a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a412c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-31 11:31:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0d5adcf03165578fc1d8aa05b735027c27bb1190880bc5d041f63d450e76110d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-ctqm5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ctqm5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-ctqm5,UID:3e36d70e-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676720,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41387 0xc001a41388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a413f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-jb5nm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jb5nm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-jb5nm,UID:1e2217ee-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676607,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41487 0xc001a41488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a414f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-31 11:31:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a12bb135ad5f32f8b7ab8a62b1692975ca1b1f10dbc680ad4386a8bd5fbf509f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-jn7tk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jn7tk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-jn7tk,UID:3e09a630-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676709,Generation:0,CreationTimestamp:2019-12-31 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a415d7 0xc001a415d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-kwzgm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kwzgm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-kwzgm,UID:3e08f702-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676725,Generation:0,CreationTimestamp:2019-12-31 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a416d7 0xc001a416d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.618: INFO: Pod "nginx-deployment-85ddf47c5d-l8rr7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l8rr7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-l8rr7,UID:1e0e0f50-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676562,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a417d7 0xc001a417d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-31 11:31:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1ad02f228ac999b8f3c58807f19eb2e9fcbb114d4902cac02edf3a788969770e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.619: INFO: Pod "nginx-deployment-85ddf47c5d-l9mnv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l9mnv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-l9mnv,UID:3e370553-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676722,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41927 0xc001a41928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a419b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.619: INFO: Pod "nginx-deployment-85ddf47c5d-lz69p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lz69p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-lz69p,UID:3e37303e-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676723,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41a27 0xc001a41a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.619: INFO: Pod "nginx-deployment-85ddf47c5d-r55kj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r55kj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-r55kj,UID:1e228720-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676613,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41b27 0xc001a41b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-31 11:31:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fa6cef2b870f8560cffaa702f759d8c9764e6e1e875832f8acd89921e671cc8b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.619: INFO: Pod "nginx-deployment-85ddf47c5d-sdj9k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sdj9k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-sdj9k,UID:3e097458-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676710,Generation:0,CreationTimestamp:2019-12-31 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41cf7 0xc001a41cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001a41d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a41d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.619: INFO: Pod "nginx-deployment-85ddf47c5d-tvfbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tvfbf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-tvfbf,UID:3d30b998-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676750,Generation:0,CreationTimestamp:2019-12-31 11:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001a41df7 0xc001a41df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001d86000} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d86020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-31 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.619: INFO: Pod "nginx-deployment-85ddf47c5d-vlpw9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vlpw9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-vlpw9,UID:3e376d80-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676726,Generation:0,CreationTimestamp:2019-12-31 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001d860d7 0xc001d860d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001d86140} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d86160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 31 11:32:44.620: INFO: Pod "nginx-deployment-85ddf47c5d-zsnd6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zsnd6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bjmkt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjmkt/pods/nginx-deployment-85ddf47c5d-zsnd6,UID:1e0dd7f7-2bc1-11ea-a994-fa163e34d433,ResourceVersion:16676571,Generation:0,CreationTimestamp:2019-12-31 11:31:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1e016436-2bc1-11ea-a994-fa163e34d433 0xc001d861d7 0xc001d861d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-64jgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-64jgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-64jgh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001d86240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d86260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:32:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:31:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-31 11:31:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 11:32:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fd39bd9f9bbfa277f7ae107fe468750001d6e2d82d04ee669146118f6fae840e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:32:44.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bjmkt" for this suite.
Dec 31 11:34:00.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:34:00.995: INFO: namespace: e2e-tests-deployment-bjmkt, resource: bindings, ignored listing per whitelist
Dec 31 11:34:01.076: INFO: namespace e2e-tests-deployment-bjmkt deletion completed in 1m15.179038436s

• [SLOW TEST:139.505 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
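Editor's note: the pod dump above came from a proportional-scaling Deployment rollout. A minimal sketch of a comparable Deployment, matching the image and labels seen in the log — the name, replica count, and surge settings are illustrative assumptions, not values read from the test source:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical name modeled on the log
spec:
  replicas: 10                  # assumed; proportional scaling matters during a rollout
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3               # extra pods allowed above the desired count
      maxUnavailable: 2         # pods that may be unavailable during the rollout
  template:
    metadata:
      labels:
        name: nginx             # matches the Labels map in the pod dump
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

When such a Deployment is scaled mid-rollout, the controller distributes the replica change proportionally across the old and new ReplicaSets, which is the behavior this spec validates.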
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:34:01.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 31 11:34:05.340: INFO: Waiting up to 5m0s for pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-ll5qr" to be "success or failure"
Dec 31 11:34:06.718: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.378606222s
Dec 31 11:34:09.126: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.786570482s
Dec 31 11:34:11.854: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513911185s
Dec 31 11:34:14.306: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966619168s
Dec 31 11:34:16.348: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.007639617s
Dec 31 11:34:18.505: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.164710404s
Dec 31 11:34:20.567: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.227293212s
Dec 31 11:34:22.582: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.241730783s
Dec 31 11:34:24.900: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.560406308s
Dec 31 11:34:26.923: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.582844575s
Dec 31 11:34:28.949: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.608616937s
Dec 31 11:34:30.978: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.637716207s
Dec 31 11:34:33.136: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.795981448s
Dec 31 11:34:35.152: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.812449072s
Dec 31 11:34:37.527: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.186643588s
Dec 31 11:34:39.971: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.63157288s
Dec 31 11:34:41.999: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.659005839s
Dec 31 11:34:44.037: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.697226171s
Dec 31 11:34:46.043: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.703597996s
Dec 31 11:34:48.121: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.781026364s
Dec 31 11:34:50.166: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.826537025s
Dec 31 11:34:52.222: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.881655789s
STEP: Saw pod success
Dec 31 11:34:52.222: INFO: Pod "pod-72acd5d2-2bc1-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:34:52.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-72acd5d2-2bc1-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:34:52.711: INFO: Waiting for pod pod-72acd5d2-2bc1-11ea-a129-0242ac110005 to disappear
Dec 31 11:34:52.726: INFO: Pod pod-72acd5d2-2bc1-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:34:52.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ll5qr" for this suite.
Dec 31 11:34:58.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:34:58.975: INFO: namespace: e2e-tests-emptydir-ll5qr, resource: bindings, ignored listing per whitelist
Dec 31 11:34:59.006: INFO: namespace e2e-tests-emptydir-ll5qr deletion completed in 6.269582843s

• [SLOW TEST:57.931 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
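Editor's note: the "(root,0666,tmpfs)" spec above exercises an emptyDir volume backed by memory. A minimal sketch of the kind of pod it creates, assuming a busybox image in place of the e2e mounttest image the suite actually uses; names and the command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox              # the real test uses a dedicated e2e test image
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # Memory medium backs the volume with tmpfs
```

The pod runs to completion ("success or failure" in the log) and the test asserts on the file mode it prints.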
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:34:59.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 31 11:34:59.220: INFO: Waiting up to 5m0s for pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-nx48p" to be "success or failure"
Dec 31 11:34:59.228: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394104ms
Dec 31 11:35:01.497: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276686248s
Dec 31 11:35:03.537: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317565364s
Dec 31 11:35:06.071: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.851092027s
Dec 31 11:35:08.088: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.868024808s
Dec 31 11:35:10.101: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.880727037s
STEP: Saw pod success
Dec 31 11:35:10.101: INFO: Pod "downward-api-93882199-2bc1-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:35:10.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-93882199-2bc1-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 11:35:10.368: INFO: Waiting for pod downward-api-93882199-2bc1-11ea-a129-0242ac110005 to disappear
Dec 31 11:35:10.382: INFO: Pod downward-api-93882199-2bc1-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:35:10.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nx48p" for this suite.
Dec 31 11:35:16.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:35:16.852: INFO: namespace: e2e-tests-downward-api-nx48p, resource: bindings, ignored listing per whitelist
Dec 31 11:35:16.931: INFO: namespace e2e-tests-downward-api-nx48p deletion completed in 6.435398726s

• [SLOW TEST:17.924 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
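Editor's note: the Downward API spec above checks that `limits.cpu`/`limits.memory` resolve to node allocatable when the container declares no limits. A minimal sketch of such a pod; the name, image, and env var names are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    # No resources.limits are set, so resourceFieldRef falls back
    # to the node's allocatable CPU and memory.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```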
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:35:16.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-9e35cadb-2bc1-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:35:17.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-6hcdp" to be "success or failure"
Dec 31 11:35:17.244: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.541464ms
Dec 31 11:35:19.254: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037636314s
Dec 31 11:35:21.272: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055426236s
Dec 31 11:35:23.716: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499622437s
Dec 31 11:35:25.731: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514803545s
Dec 31 11:35:27.744: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.527776767s
STEP: Saw pod success
Dec 31 11:35:27.744: INFO: Pod "pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:35:27.750: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 31 11:35:27.878: INFO: Waiting for pod pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005 to disappear
Dec 31 11:35:27.949: INFO: Pod pod-configmaps-9e402645-2bc1-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:35:27.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6hcdp" for this suite.
Dec 31 11:35:33.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:35:34.156: INFO: namespace: e2e-tests-configmap-6hcdp, resource: bindings, ignored listing per whitelist
Dec 31 11:35:34.258: INFO: namespace e2e-tests-configmap-6hcdp deletion completed in 6.30234957s

• [SLOW TEST:17.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
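Editor's note: the ConfigMap spec above mounts one ConfigMap into two volumes in the same pod. A minimal sketch; the ConfigMap name, key, and mount paths are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps          # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  # The same ConfigMap backs two distinct volumes.
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume   # assumed ConfigMap containing key data-1
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
```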
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:35:34.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 31 11:35:34.496: INFO: Waiting up to 5m0s for pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005" in namespace "e2e-tests-containers-drpxt" to be "success or failure"
Dec 31 11:35:34.560: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.837195ms
Dec 31 11:35:36.602: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106214351s
Dec 31 11:35:38.646: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150242539s
Dec 31 11:35:40.992: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496263931s
Dec 31 11:35:43.014: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517752048s
Dec 31 11:35:45.028: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.532350979s
STEP: Saw pod success
Dec 31 11:35:45.028: INFO: Pod "client-containers-a8890c86-2bc1-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:35:45.035: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a8890c86-2bc1-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:35:45.263: INFO: Waiting for pod client-containers-a8890c86-2bc1-11ea-a129-0242ac110005 to disappear
Dec 31 11:35:45.295: INFO: Pod client-containers-a8890c86-2bc1-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:35:45.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-drpxt" for this suite.
Dec 31 11:35:53.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:35:53.551: INFO: namespace: e2e-tests-containers-drpxt, resource: bindings, ignored listing per whitelist
Dec 31 11:35:53.579: INFO: namespace e2e-tests-containers-drpxt deletion completed in 8.278123512s

• [SLOW TEST:19.320 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
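Editor's note: the "override the image's default arguments" spec above relies on the fact that a pod's `args` replaces the image's CMD while leaving its ENTRYPOINT intact. A minimal sketch, assuming an image whose entrypoint echoes its arguments (the e2e suite uses a purpose-built entrypoint-tester image; the names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: example/entrypoint-tester   # assumed image; entrypoint prints its args
    # args replaces the image's default CMD; the ENTRYPOINT still runs.
    args: ["override", "arguments"]
```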
S
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:35:53.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-mdkbb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mdkbb to expose endpoints map[]
Dec 31 11:35:54.093: INFO: Get endpoints failed (39.204938ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 31 11:35:55.111: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mdkbb exposes endpoints map[] (1.05726715s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-mdkbb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mdkbb to expose endpoints map[pod1:[100]]
Dec 31 11:36:00.270: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.126950294s elapsed, will retry)
Dec 31 11:36:04.739: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mdkbb exposes endpoints map[pod1:[100]] (9.596029316s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-mdkbb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mdkbb to expose endpoints map[pod1:[100] pod2:[101]]
Dec 31 11:36:09.784: INFO: Unexpected endpoints: found map[b4dbbe4a-2bc1-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.010934199s elapsed, will retry)
Dec 31 11:36:15.333: INFO: Unexpected endpoints: found map[b4dbbe4a-2bc1-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (10.560750733s elapsed, will retry)
Dec 31 11:36:16.364: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mdkbb exposes endpoints map[pod1:[100] pod2:[101]] (11.591136045s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-mdkbb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mdkbb to expose endpoints map[pod2:[101]]
Dec 31 11:36:17.440: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mdkbb exposes endpoints map[pod2:[101]] (1.046089513s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-mdkbb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mdkbb to expose endpoints map[]
Dec 31 11:36:18.908: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mdkbb exposes endpoints map[] (1.45509187s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:36:19.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-mdkbb" for this suite.
Dec 31 11:36:43.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:36:43.266: INFO: namespace: e2e-tests-services-mdkbb, resource: bindings, ignored listing per whitelist
Dec 31 11:36:43.356: INFO: namespace e2e-tests-services-mdkbb deletion completed in 24.23968168s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.777 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
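Editor's note: the multiport Services spec above waits for the endpoints map to track pods behind two named ports. A sketch of a comparable Service; the selector label and port names are illustrative, while the targetPorts 100 and 101 match the `pod1:[100]`/`pod2:[101]` endpoints seen in the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test    # assumed label carried by pod1 and pod2
  ports:
  - name: portname1
    port: 80
    targetPort: 100             # endpoint port exposed by pod1
  - name: portname2
    port: 81
    targetPort: 101             # endpoint port exposed by pod2
```

As pods matching the selector are created and deleted, the endpoints controller updates the Service's Endpoints object, which is what the repeated "waiting ... to expose endpoints map[...]" steps poll for.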
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:36:43.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-zgp7g
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zgp7g to expose endpoints map[]
Dec 31 11:36:43.740: INFO: Get endpoints failed (96.510309ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 31 11:36:44.763: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zgp7g exposes endpoints map[] (1.118896343s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zgp7g
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zgp7g to expose endpoints map[pod1:[80]]
Dec 31 11:36:49.136: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.327081926s elapsed, will retry)
Dec 31 11:36:54.868: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (10.058321154s elapsed, will retry)
Dec 31 11:36:56.909: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zgp7g exposes endpoints map[pod1:[80]] (12.099855441s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zgp7g
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zgp7g to expose endpoints map[pod1:[80] pod2:[80]]
Dec 31 11:37:01.525: INFO: Unexpected endpoints: found map[d274e1ab-2bc1-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.590313939s elapsed, will retry)
Dec 31 11:37:06.898: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zgp7g exposes endpoints map[pod1:[80] pod2:[80]] (9.9632867s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zgp7g
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zgp7g to expose endpoints map[pod2:[80]]
Dec 31 11:37:08.512: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zgp7g exposes endpoints map[pod2:[80]] (1.606965108s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zgp7g
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zgp7g to expose endpoints map[]
Dec 31 11:37:08.904: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zgp7g exposes endpoints map[] (346.116145ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:37:10.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zgp7g" for this suite.
Dec 31 11:37:34.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:37:34.983: INFO: namespace: e2e-tests-services-zgp7g, resource: bindings, ignored listing per whitelist
Dec 31 11:37:35.018: INFO: namespace e2e-tests-services-zgp7g deletion completed in 24.242465212s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:51.662 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:37:35.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 31 11:37:35.262: INFO: Waiting up to 5m0s for pod "pod-f07df361-2bc1-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-fpn27" to be "success or failure"
Dec 31 11:37:35.280: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.665577ms
Dec 31 11:37:37.318: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055770777s
Dec 31 11:37:39.337: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07510398s
Dec 31 11:37:41.671: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409190618s
Dec 31 11:37:43.682: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.420597243s
Dec 31 11:37:45.706: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.443760018s
STEP: Saw pod success
Dec 31 11:37:45.706: INFO: Pod "pod-f07df361-2bc1-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:37:45.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f07df361-2bc1-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:37:45.973: INFO: Waiting for pod pod-f07df361-2bc1-11ea-a129-0242ac110005 to disappear
Dec 31 11:37:45.982: INFO: Pod pod-f07df361-2bc1-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:37:45.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fpn27" for this suite.
Dec 31 11:37:52.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:37:52.246: INFO: namespace: e2e-tests-emptydir-fpn27, resource: bindings, ignored listing per whitelist
Dec 31 11:37:52.372: INFO: namespace e2e-tests-emptydir-fpn27 deletion completed in 6.374293139s

• [SLOW TEST:17.354 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:37:52.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 11:37:52.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-rjkqc'
Dec 31 11:37:52.831: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 11:37:52.831: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 31 11:37:52.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-rjkqc'
Dec 31 11:37:53.179: INFO: stderr: ""
Dec 31 11:37:53.180: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:37:53.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rjkqc" for this suite.
Dec 31 11:38:15.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:38:15.351: INFO: namespace: e2e-tests-kubectl-rjkqc, resource: bindings, ignored listing per whitelist
Dec 31 11:38:15.386: INFO: namespace e2e-tests-kubectl-rjkqc deletion completed in 22.175211693s

• [SLOW TEST:23.013 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
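[Annotation] The test above ran `kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1`, and the stderr line confirms that generator is deprecated. A sketch of the Job object that invocation creates, reconstructed from the logged flags (field defaults are assumptions, not taken from this log):

```yaml
# Roughly what `kubectl run --restart=OnFailure --generator=job/v1` produced.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure   # implied by --restart=OnFailure
```

On current clusters the same result comes from `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`, which is what the deprecation message points toward.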
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:38:15.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-088a5c69-2bc2-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 11:38:15.598: INFO: Waiting up to 5m0s for pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-vhzwp" to be "success or failure"
Dec 31 11:38:15.612: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.716804ms
Dec 31 11:38:17.975: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376229078s
Dec 31 11:38:20.008: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409985063s
Dec 31 11:38:22.672: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.073880371s
Dec 31 11:38:24.703: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.104203903s
Dec 31 11:38:26.730: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.131933861s
STEP: Saw pod success
Dec 31 11:38:26.730: INFO: Pod "pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:38:26.748: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 11:38:27.123: INFO: Waiting for pod pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:38:27.252: INFO: Pod pod-secrets-088ceab4-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:38:27.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vhzwp" for this suite.
Dec 31 11:38:33.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:38:33.549: INFO: namespace: e2e-tests-secrets-vhzwp, resource: bindings, ignored listing per whitelist
Dec 31 11:38:33.655: INFO: namespace e2e-tests-secrets-vhzwp deletion completed in 6.380206738s

• [SLOW TEST:18.269 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
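[Annotation] The Secrets test above creates a Secret, mounts it into a pod as a volume, and checks the pod exits successfully after reading it. A minimal sketch of that pattern; the names, image, and key/path here are illustrative, not the exact objects the e2e framework built:

```yaml
# Hypothetical reconstruction of the secret-volume consumption pattern.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # test used a generated name
spec:
  containers:
  - name: secret-volume-test       # container name matches the log
    image: busybox                 # assumption; the e2e suite uses its own test image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # test used a generated name
  restartPolicy: Never
```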
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:38:33.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 11:38:33.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-bcbsj'
Dec 31 11:38:34.170: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 11:38:34.170: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 31 11:38:34.237: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cn2q8]
Dec 31 11:38:34.237: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cn2q8" in namespace "e2e-tests-kubectl-bcbsj" to be "running and ready"
Dec 31 11:38:34.287: INFO: Pod "e2e-test-nginx-rc-cn2q8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.119564ms
Dec 31 11:38:36.462: INFO: Pod "e2e-test-nginx-rc-cn2q8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225231604s
Dec 31 11:38:38.482: INFO: Pod "e2e-test-nginx-rc-cn2q8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245138626s
Dec 31 11:38:40.515: INFO: Pod "e2e-test-nginx-rc-cn2q8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277711821s
Dec 31 11:38:42.575: INFO: Pod "e2e-test-nginx-rc-cn2q8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338287993s
Dec 31 11:38:44.620: INFO: Pod "e2e-test-nginx-rc-cn2q8": Phase="Running", Reason="", readiness=true. Elapsed: 10.382671154s
Dec 31 11:38:44.620: INFO: Pod "e2e-test-nginx-rc-cn2q8" satisfied condition "running and ready"
Dec 31 11:38:44.620: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-cn2q8]
Dec 31 11:38:44.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bcbsj'
Dec 31 11:38:44.944: INFO: stderr: ""
Dec 31 11:38:44.944: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 31 11:38:44.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bcbsj'
Dec 31 11:38:45.203: INFO: stderr: ""
Dec 31 11:38:45.204: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:38:45.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bcbsj" for this suite.
Dec 31 11:39:09.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:39:09.471: INFO: namespace: e2e-tests-kubectl-bcbsj, resource: bindings, ignored listing per whitelist
Dec 31 11:39:09.513: INFO: namespace e2e-tests-kubectl-bcbsj deletion completed in 24.302794827s

• [SLOW TEST:35.857 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
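[Annotation] This test ran `kubectl run e2e-test-nginx-rc --generator=run/v1`, which (per the stdout line) creates a ReplicationController rather than a Deployment or bare pod. A sketch of the resulting object, inferred from the logged command (selector/label details are assumptions based on how `run/v1` labels its pods):

```yaml
# Roughly what `kubectl run --generator=run/v1` produced.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

Note the empty `logs rc/e2e-test-nginx-rc` stdout above is expected here: nginx 1.14 logs access entries, and no requests were made before the log check.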
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:39:09.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 31 11:39:09.693: INFO: namespace e2e-tests-kubectl-gj99s
Dec 31 11:39:09.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gj99s'
Dec 31 11:39:10.201: INFO: stderr: ""
Dec 31 11:39:10.201: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 31 11:39:12.181: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:12.181: INFO: Found 0 / 1
Dec 31 11:39:12.976: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:12.976: INFO: Found 0 / 1
Dec 31 11:39:13.219: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:13.220: INFO: Found 0 / 1
Dec 31 11:39:14.226: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:14.226: INFO: Found 0 / 1
Dec 31 11:39:15.229: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:15.229: INFO: Found 0 / 1
Dec 31 11:39:16.225: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:16.225: INFO: Found 0 / 1
Dec 31 11:39:17.800: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:17.800: INFO: Found 0 / 1
Dec 31 11:39:18.605: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:18.605: INFO: Found 0 / 1
Dec 31 11:39:19.245: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:19.245: INFO: Found 0 / 1
Dec 31 11:39:20.218: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:20.218: INFO: Found 0 / 1
Dec 31 11:39:21.244: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:21.245: INFO: Found 1 / 1
Dec 31 11:39:21.245: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 31 11:39:21.298: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:39:21.298: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 31 11:39:21.298: INFO: wait on redis-master startup in e2e-tests-kubectl-gj99s 
Dec 31 11:39:21.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lrtzx redis-master --namespace=e2e-tests-kubectl-gj99s'
Dec 31 11:39:21.467: INFO: stderr: ""
Dec 31 11:39:21.467: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Dec 11:39:19.363 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Dec 11:39:19.363 # Server started, Redis version 3.2.12\n1:M 31 Dec 11:39:19.364 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Dec 11:39:19.364 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 31 11:39:21.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-gj99s'
Dec 31 11:39:21.850: INFO: stderr: ""
Dec 31 11:39:21.850: INFO: stdout: "service/rm2 exposed\n"
Dec 31 11:39:21.894: INFO: Service rm2 in namespace e2e-tests-kubectl-gj99s found.
STEP: exposing service
Dec 31 11:39:23.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-gj99s'
Dec 31 11:39:24.330: INFO: stderr: ""
Dec 31 11:39:24.331: INFO: stdout: "service/rm3 exposed\n"
Dec 31 11:39:24.340: INFO: Service rm3 in namespace e2e-tests-kubectl-gj99s found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:39:26.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gj99s" for this suite.
Dec 31 11:39:52.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:39:52.968: INFO: namespace: e2e-tests-kubectl-gj99s, resource: bindings, ignored listing per whitelist
Dec 31 11:39:53.017: INFO: namespace e2e-tests-kubectl-gj99s deletion completed in 26.63358597s

• [SLOW TEST:43.504 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
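[Annotation] The expose test ran `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379`, then exposed that service again as `rm3` on port 2345. `kubectl expose` copies the workload's selector into a new Service; a sketch of the first one, using the `app: redis` selector the log's pod-matching lines show:

```yaml
# Roughly the Service created by the first `kubectl expose` above.
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis          # inherited from the redis-master RC's pod labels
  ports:
  - port: 1234          # service port, from --port
    targetPort: 6379    # container port, from --target-port
```

Exposing `rm2` as `rm3` works the same way: `rm3` inherits `rm2`'s selector, so both services route to the same redis pod on 6379.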
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:39:53.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-42c462f4-2bc2-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:39:53.341: INFO: Waiting up to 5m0s for pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-r7h8f" to be "success or failure"
Dec 31 11:39:53.354: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.69931ms
Dec 31 11:39:55.663: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32187622s
Dec 31 11:39:57.683: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342539493s
Dec 31 11:39:59.694: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.35311053s
Dec 31 11:40:01.702: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361340652s
Dec 31 11:40:03.719: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.377966681s
STEP: Saw pod success
Dec 31 11:40:03.719: INFO: Pod "pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:40:03.723: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 31 11:40:05.050: INFO: Waiting for pod pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:40:05.082: INFO: Pod pod-configmaps-42c6bd04-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:40:05.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r7h8f" for this suite.
Dec 31 11:40:11.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:40:11.252: INFO: namespace: e2e-tests-configmap-r7h8f, resource: bindings, ignored listing per whitelist
Dec 31 11:40:11.317: INFO: namespace e2e-tests-configmap-r7h8f deletion completed in 6.224749923s

• [SLOW TEST:18.300 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
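[Annotation] "with mappings as non-root" means two things beyond a plain ConfigMap mount: the `items` list remaps a key to a custom path inside the volume, and the pod runs with a non-root UID. A sketch under those assumptions (names, UID, key, and path are illustrative):

```yaml
# Hypothetical reconstruction of the mapped, non-root ConfigMap mount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # test used a generated name
spec:
  securityContext:
    runAsUser: 1000                 # "as non-root"; exact UID is an assumption
  containers:
  - name: configmap-volume-test     # container name matches the log
    image: busybox                  # assumption; the e2e suite uses its own test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:                        # the "mappings": key -> custom path
      - key: data-2
        path: path/to/data-2
  restartPolicy: Never
```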
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:40:11.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:40:11.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-cfc2q" to be "success or failure"
Dec 31 11:40:11.608: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.476968ms
Dec 31 11:40:14.824: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.231224568s
Dec 31 11:40:16.909: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.315393219s
Dec 31 11:40:19.313: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.720289252s
Dec 31 11:40:21.369: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.775684506s
Dec 31 11:40:23.383: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.790308384s
STEP: Saw pod success
Dec 31 11:40:23.383: INFO: Pod "downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:40:23.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:40:23.455: INFO: Waiting for pod downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:40:23.499: INFO: Pod downwardapi-volume-4db85cec-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:40:23.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cfc2q" for this suite.
Dec 31 11:40:30.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:40:30.801: INFO: namespace: e2e-tests-downward-api-cfc2q, resource: bindings, ignored listing per whitelist
Dec 31 11:40:30.962: INFO: namespace e2e-tests-downward-api-cfc2q deletion completed in 7.39313974s

• [SLOW TEST:19.644 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
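[Annotation] "should provide podname only" exercises the downward API volume plugin: the pod's own `metadata.name` is projected into a file, which the test container reads back. A minimal sketch (file path and image are assumptions):

```yaml
# Hypothetical reconstruction of the downward API podname projection.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # test used a generated name
spec:
  containers:
  - name: client-container           # container name matches the log
    image: busybox                   # assumption; the e2e suite uses its own test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the "podname only" field
  restartPolicy: Never
```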
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:40:30.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:40:41.379: INFO: Waiting up to 5m0s for pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-pods-qbllj" to be "success or failure"
Dec 31 11:40:41.526: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 146.866789ms
Dec 31 11:40:43.550: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170996365s
Dec 31 11:40:45.566: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186821382s
Dec 31 11:40:47.923: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544137738s
Dec 31 11:40:49.943: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.564339522s
Dec 31 11:40:51.967: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.588027275s
STEP: Saw pod success
Dec 31 11:40:51.967: INFO: Pod "client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:40:51.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 31 11:40:52.834: INFO: Waiting for pod client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:40:53.056: INFO: Pod client-envvars-5f766aeb-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:40:53.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qbllj" for this suite.
Dec 31 11:41:47.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:41:47.321: INFO: namespace: e2e-tests-pods-qbllj, resource: bindings, ignored listing per whitelist
Dec 31 11:41:47.355: INFO: namespace e2e-tests-pods-qbllj deletion completed in 54.284076315s

• [SLOW TEST:76.393 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:41:47.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 31 11:42:07.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:07.939: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:09.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:09.986: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:11.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:11.952: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:13.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:13.958: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:15.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:15.993: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:17.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:17.982: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:19.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:19.961: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:21.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:21.957: INFO: Pod pod-with-prestop-http-hook still exists
Dec 31 11:42:23.939: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 31 11:42:23.965: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:42:24.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fw2b2" for this suite.
Dec 31 11:42:50.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:42:50.721: INFO: namespace: e2e-tests-container-lifecycle-hook-fw2b2, resource: bindings, ignored listing per whitelist
Dec 31 11:42:50.900: INFO: namespace e2e-tests-container-lifecycle-hook-fw2b2 deletion completed in 26.887323386s

• [SLOW TEST:63.544 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
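[Annotation] The preStop http test runs a handler pod first ("create the container to handle the HTTPGet hook request"), then a pod whose preStop hook calls back into it on delete; the polling above is the graceful deletion window while the hook runs. A sketch of the hooked pod (path, port, and host are assumptions):

```yaml
# Hypothetical reconstruction of the pod with the preStop httpGet hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name matches the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # assumption
    lifecycle:
      preStop:
        httpGet:
          path: /echo                # illustrative; real test records the hit
          port: 8080
          host: 10.32.0.5            # handler pod's IP; illustrative value
```

The "check prestop hook" step then asks the handler whether the GET arrived, proving the hook fired before the container was killed.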
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:42:50.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 31 11:42:51.085: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 31 11:42:51.096: INFO: Waiting for terminating namespaces to be deleted...
Dec 31 11:42:51.104: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 31 11:42:51.116: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 31 11:42:51.116: INFO: 	Container weave ready: true, restart count 0
Dec 31 11:42:51.116: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 11:42:51.116: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 31 11:42:51.116: INFO: 	Container coredns ready: true, restart count 0
Dec 31 11:42:51.116: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 31 11:42:51.116: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 31 11:42:51.116: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 31 11:42:51.116: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 31 11:42:51.116: INFO: 	Container coredns ready: true, restart count 0
Dec 31 11:42:51.116: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 31 11:42:51.116: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 11:42:51.116: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b2ea4233-2bc2-11ea-a129-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b2ea4233-2bc2-11ea-a129-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b2ea4233-2bc2-11ea-a129-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:43:13.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-v66ss" for this suite.
Dec 31 11:43:25.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:43:26.129: INFO: namespace: e2e-tests-sched-pred-v66ss, resource: bindings, ignored listing per whitelist
Dec 31 11:43:26.141: INFO: namespace e2e-tests-sched-pred-v66ss deletion completed in 12.369969373s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:35.241 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
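The NodeSelector test above applies a random label to a node, relaunches the pod with a matching selector, and then removes the label with the trailing-dash form of `kubectl label`. A minimal sketch of that round-trip, with `kubectl` stubbed by a shell function so it runs without a cluster (the label key here is a stand-in for the random `kubernetes.io/e2e-...` key the test generates):

```shell
# Sketch of the label apply/remove round-trip; `kubectl` is a stub, not
# the real CLI, so this runs offline. Node name taken from the log.
node=hunter-server-hu5at5svl7ps
key=kubernetes.io/e2e-example   # hypothetical stand-in for the random key

labels=""
kubectl() {
  # stub: only understands `label node <node> <key>=<val>` and
  # `label node <node> <key>-` (a trailing '-' removes the label)
  shift 3
  case "$1" in
    *-) labels="" ;;
    *)  labels="$1" ;;
  esac
}

kubectl label node "$node" "$key=42"   # apply, as in the "verifying the node has the label" step
applied=$labels
kubectl label node "$node" "$key-"     # remove, as in the "removing the label ... off the node" step
echo "applied=$applied removed_ok=$([ -z "$labels" ] && echo yes)"
# prints: applied=kubernetes.io/e2e-example=42 removed_ok=yes
```

Against a real cluster the same two `kubectl label node` invocations (without the stub function) perform the apply and remove steps.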
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:43:26.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 31 11:43:26.366: INFO: Waiting up to 5m0s for pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-jzdqp" to be "success or failure"
Dec 31 11:43:26.379: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.637015ms
Dec 31 11:43:28.432: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066068366s
Dec 31 11:43:30.450: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084746122s
Dec 31 11:43:32.618: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252697275s
Dec 31 11:43:34.633: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.267065992s
Dec 31 11:43:36.660: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.294010302s
STEP: Saw pod success
Dec 31 11:43:36.660: INFO: Pod "pod-c1d0d756-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:43:36.682: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c1d0d756-2bc2-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:43:37.927: INFO: Waiting for pod pod-c1d0d756-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:43:37.946: INFO: Pod pod-c1d0d756-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:43:37.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jzdqp" for this suite.
Dec 31 11:43:44.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:43:44.134: INFO: namespace: e2e-tests-emptydir-jzdqp, resource: bindings, ignored listing per whitelist
Dec 31 11:43:44.290: INFO: namespace e2e-tests-emptydir-jzdqp deletion completed in 6.327038547s

• [SLOW TEST:18.149 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
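The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework polling the pod's phase until it reaches the "success or failure" condition. A minimal sketch of that wait loop, with the API lookup stubbed (the real framework reads `{.status.phase}` from the API server and enforces the 5m0s timeout, which this sketch omits; here the pod stays Pending for two polls and then reports Succeeded):

```shell
# Hypothetical sketch of the "success or failure" wait loop. The phase
# values come from a stub, not a cluster, so this runs offline.
attempt=0
phase=""
while [ "$phase" != "Succeeded" ] && [ "$phase" != "Failed" ]; do
  attempt=$((attempt + 1))
  # stub for: kubectl get pod <name> -o jsonpath='{.status.phase}'
  if [ "$attempt" -ge 3 ]; then phase=Succeeded; else phase=Pending; fi
  echo "poll $attempt: phase=$phase"
done
echo "pod satisfied condition after $attempt polls"
```

The real loop also sleeps between polls, which is why each logged attempt is roughly two seconds apart.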
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:43:44.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ccb8079c-2bc2-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:43:44.721: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-v74d2" to be "success or failure"
Dec 31 11:43:44.798: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.976891ms
Dec 31 11:43:47.046: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324789389s
Dec 31 11:43:49.066: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344247373s
Dec 31 11:43:51.270: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548077698s
Dec 31 11:43:53.323: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601540811s
Dec 31 11:43:55.335: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.613504743s
STEP: Saw pod success
Dec 31 11:43:55.335: INFO: Pod "pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:43:55.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 11:43:56.020: INFO: Waiting for pod pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:43:56.425: INFO: Pod pod-projected-configmaps-ccbcebf9-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:43:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v74d2" for this suite.
Dec 31 11:44:02.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:44:02.823: INFO: namespace: e2e-tests-projected-v74d2, resource: bindings, ignored listing per whitelist
Dec 31 11:44:02.965: INFO: namespace e2e-tests-projected-v74d2 deletion completed in 6.515302602s

• [SLOW TEST:18.673 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:44:02.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tzshr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tzshr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 11:44:19.300: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.322: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.331: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.336: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.340: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.344: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.348: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.352: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.356: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.360: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.364: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.368: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.377: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.383: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.391: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.396: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.401: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.417: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.425: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.429: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005)
Dec 31 11:44:19.429: INFO: Lookups using e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tzshr.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 31 11:44:24.881: INFO: DNS probes using e2e-tests-dns-tzshr/dns-test-d7b66e6d-2bc2-11ea-a129-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:44:25.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-tzshr" for this suite.
Dec 31 11:44:33.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:44:33.276: INFO: namespace: e2e-tests-dns-tzshr, resource: bindings, ignored listing per whitelist
Dec 31 11:44:33.378: INFO: namespace e2e-tests-dns-tzshr deletion completed in 8.269841381s

• [SLOW TEST:30.413 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
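The dig one-liners in the DNS test above are dense; their structure is simple: for each name, run a lookup, and if the answer is non-empty, drop an `OK` marker file under `/results/` that the prober later reads back (the early "Unable to read" lines are the prober retrying before the markers exist). A readable sketch of that convention, with the DNS lookup stubbed so it runs offline:

```shell
# Readable expansion of the probe loop; lookup is a stub, not dig,
# and markers go to a temp dir rather than /results.
results=$(mktemp -d)
lookup() {
  # stand-in for: dig +notcp +noall +answer +search "$1" A
  echo "10.96.0.1"
}
for name in kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local; do
  check=$(lookup "$name")
  # non-empty answer => the name resolved => write the OK marker
  [ -n "$check" ] && echo OK > "$results/udp@$name"
done
echo "markers written: $(ls "$results" | wc -l | tr -d ' ')"
```

The real commands repeat this for TCP (`+tcp`), for `getent hosts` checks, and for the pod's own A record, once per second for up to 600 iterations.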
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:44:33.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 31 11:44:33.633: INFO: Waiting up to 5m0s for pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-nr8kz" to be "success or failure"
Dec 31 11:44:33.655: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.866281ms
Dec 31 11:44:35.922: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288377674s
Dec 31 11:44:37.944: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310145217s
Dec 31 11:44:40.481: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847774699s
Dec 31 11:44:42.502: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.868604066s
Dec 31 11:44:44.527: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.893804752s
STEP: Saw pod success
Dec 31 11:44:44.528: INFO: Pod "pod-e9e48b58-2bc2-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:44:44.543: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e9e48b58-2bc2-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:44:44.679: INFO: Waiting for pod pod-e9e48b58-2bc2-11ea-a129-0242ac110005 to disappear
Dec 31 11:44:44.698: INFO: Pod pod-e9e48b58-2bc2-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:44:44.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nr8kz" for this suite.
Dec 31 11:44:50.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:44:51.427: INFO: namespace: e2e-tests-emptydir-nr8kz, resource: bindings, ignored listing per whitelist
Dec 31 11:44:51.925: INFO: namespace e2e-tests-emptydir-nr8kz deletion completed in 7.218314702s

• [SLOW TEST:18.547 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:44:51.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 31 11:44:52.363: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:45:09.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6ltp7" for this suite.
Dec 31 11:45:15.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:45:15.607: INFO: namespace: e2e-tests-init-container-6ltp7, resource: bindings, ignored listing per whitelist
Dec 31 11:45:15.667: INFO: namespace e2e-tests-init-container-6ltp7 deletion completed in 6.405183547s

• [SLOW TEST:23.741 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:45:15.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0310a201-2bc3-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 11:45:15.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-pdxhr" to be "success or failure"
Dec 31 11:45:15.953: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.383023ms
Dec 31 11:45:18.179: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252245206s
Dec 31 11:45:20.191: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264138288s
Dec 31 11:45:22.626: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.698366242s
Dec 31 11:45:24.640: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712984177s
Dec 31 11:45:26.661: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.733860302s
STEP: Saw pod success
Dec 31 11:45:26.662: INFO: Pod "pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:45:26.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 11:45:26.843: INFO: Waiting for pod pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005 to disappear
Dec 31 11:45:26.849: INFO: Pod pod-projected-configmaps-031c5cda-2bc3-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:45:26.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pdxhr" for this suite.
Dec 31 11:45:33.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:45:33.100: INFO: namespace: e2e-tests-projected-pdxhr, resource: bindings, ignored listing per whitelist
Dec 31 11:45:33.165: INFO: namespace e2e-tests-projected-pdxhr deletion completed in 6.306381089s

• [SLOW TEST:17.498 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:45:33.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:46:33.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dtjcz" for this suite.
Dec 31 11:46:55.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:46:55.740: INFO: namespace: e2e-tests-container-probe-dtjcz, resource: bindings, ignored listing per whitelist
Dec 31 11:46:55.756: INFO: namespace e2e-tests-container-probe-dtjcz deletion completed in 22.259542309s

• [SLOW TEST:82.590 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:46:55.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 31 11:46:56.300: INFO: Number of nodes with available pods: 0
Dec 31 11:46:56.300: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:46:58.197: INFO: Number of nodes with available pods: 0
Dec 31 11:46:58.197: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:46:58.400: INFO: Number of nodes with available pods: 0
Dec 31 11:46:58.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:46:59.319: INFO: Number of nodes with available pods: 0
Dec 31 11:46:59.319: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:47:00.369: INFO: Number of nodes with available pods: 0
Dec 31 11:47:00.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:47:02.768: INFO: Number of nodes with available pods: 0
Dec 31 11:47:02.768: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:47:03.762: INFO: Number of nodes with available pods: 0
Dec 31 11:47:03.762: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:47:04.356: INFO: Number of nodes with available pods: 0
Dec 31 11:47:04.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:47:05.332: INFO: Number of nodes with available pods: 0
Dec 31 11:47:05.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 11:47:06.328: INFO: Number of nodes with available pods: 1
Dec 31 11:47:06.328: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 31 11:47:06.706: INFO: Number of nodes with available pods: 1
Dec 31 11:47:06.706: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kpc97, will wait for the garbage collector to delete the pods
Dec 31 11:47:07.808: INFO: Deleting DaemonSet.extensions daemon-set took: 19.215255ms
Dec 31 11:47:08.408: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.429281ms
Dec 31 11:47:13.214: INFO: Number of nodes with available pods: 0
Dec 31 11:47:13.214: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 11:47:13.222: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kpc97/daemonsets","resourceVersion":"16678728"},"items":null}

Dec 31 11:47:13.225: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kpc97/pods","resourceVersion":"16678728"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:47:13.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kpc97" for this suite.
Dec 31 11:47:19.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:47:19.352: INFO: namespace: e2e-tests-daemonsets-kpc97, resource: bindings, ignored listing per whitelist
Dec 31 11:47:19.582: INFO: namespace e2e-tests-daemonsets-kpc97 deletion completed in 6.344615448s

• [SLOW TEST:23.826 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:47:19.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 31 11:47:19.835: INFO: Waiting up to 5m0s for pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-scd48" to be "success or failure"
Dec 31 11:47:19.903: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.575939ms
Dec 31 11:47:22.137: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301835365s
Dec 31 11:47:24.172: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33691993s
Dec 31 11:47:26.625: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.789965039s
Dec 31 11:47:28.699: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.864138067s
Dec 31 11:47:30.725: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.890022032s
STEP: Saw pod success
Dec 31 11:47:30.725: INFO: Pod "pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:47:30.739: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:47:30.921: INFO: Waiting for pod pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005 to disappear
Dec 31 11:47:30.938: INFO: Pod pod-4cf5ee3f-2bc3-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:47:30.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-scd48" for this suite.
Dec 31 11:47:37.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:47:37.133: INFO: namespace: e2e-tests-emptydir-scd48, resource: bindings, ignored listing per whitelist
Dec 31 11:47:37.246: INFO: namespace e2e-tests-emptydir-scd48 deletion completed in 6.291396811s

• [SLOW TEST:17.664 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:47:37.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 31 11:47:37.490: INFO: Waiting up to 5m0s for pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-tv6s9" to be "success or failure"
Dec 31 11:47:37.514: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.06823ms
Dec 31 11:47:39.813: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322667021s
Dec 31 11:47:41.825: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334868831s
Dec 31 11:47:43.836: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345163457s
Dec 31 11:47:45.847: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356388096s
Dec 31 11:47:48.519: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.028754146s
STEP: Saw pod success
Dec 31 11:47:48.520: INFO: Pod "pod-576ff8bd-2bc3-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:47:48.545: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-576ff8bd-2bc3-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 11:47:48.821: INFO: Waiting for pod pod-576ff8bd-2bc3-11ea-a129-0242ac110005 to disappear
Dec 31 11:47:48.833: INFO: Pod pod-576ff8bd-2bc3-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:47:48.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tv6s9" for this suite.
Dec 31 11:47:54.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:47:54.923: INFO: namespace: e2e-tests-emptydir-tv6s9, resource: bindings, ignored listing per whitelist
Dec 31 11:47:55.050: INFO: namespace e2e-tests-emptydir-tv6s9 deletion completed in 6.211443283s

• [SLOW TEST:17.803 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:47:55.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-sd9fg
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-sd9fg
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-sd9fg
Dec 31 11:47:55.474: INFO: Found 0 stateful pods, waiting for 1
Dec 31 11:48:05.498: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 11:48:15.493: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 31 11:48:15.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 11:48:16.294: INFO: stderr: ""
Dec 31 11:48:16.294: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 11:48:16.294: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 11:48:16.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 11:48:16.408: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 11:48:16.444: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:16.444: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:16.444: INFO: 
Dec 31 11:48:16.444: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 31 11:48:18.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991925356s
Dec 31 11:48:19.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.06695952s
Dec 31 11:48:20.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.616265091s
Dec 31 11:48:21.877: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.586528867s
Dec 31 11:48:24.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.558389165s
Dec 31 11:48:25.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 689.471419ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-sd9fg
Dec 31 11:48:26.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:48:27.548: INFO: stderr: ""
Dec 31 11:48:27.548: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 11:48:27.548: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 11:48:27.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:48:28.013: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 31 11:48:28.013: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 11:48:28.013: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 11:48:28.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:48:28.719: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 31 11:48:28.719: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 11:48:28.720: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 11:48:28.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 11:48:28.733: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 11:48:38.748: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 11:48:38.748: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 11:48:38.748: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 31 11:48:38.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 11:48:39.487: INFO: stderr: ""
Dec 31 11:48:39.487: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 11:48:39.487: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 11:48:39.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 11:48:40.189: INFO: stderr: ""
Dec 31 11:48:40.189: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 11:48:40.189: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 11:48:40.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 11:48:41.070: INFO: stderr: ""
Dec 31 11:48:41.070: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 11:48:41.070: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 11:48:41.070: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 11:48:41.157: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 31 11:48:51.185: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 11:48:51.185: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 11:48:51.185: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 11:48:51.242: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:51.242: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:51.242: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:51.242: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:51.243: INFO: 
Dec 31 11:48:51.243: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:48:53.531: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:53.532: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:53.532: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:53.532: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:53.532: INFO: 
Dec 31 11:48:53.532: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:48:54.799: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:54.799: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:54.799: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:54.799: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:54.799: INFO: 
Dec 31 11:48:54.799: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:48:55.818: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:55.818: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:55.818: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:55.818: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:55.818: INFO: 
Dec 31 11:48:55.818: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:48:57.102: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:57.102: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:57.102: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:57.102: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:57.102: INFO: 
Dec 31 11:48:57.102: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:48:58.190: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:58.190: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:58.190: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:58.190: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:58.190: INFO: 
Dec 31 11:48:58.190: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:48:59.218: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:48:59.218: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:48:59.218: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:59.218: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:48:59.218: INFO: 
Dec 31 11:48:59.218: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 31 11:49:00.503: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 31 11:49:00.504: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:47:55 +0000 UTC  }]
Dec 31 11:49:00.504: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:49:00.504: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:48:16 +0000 UTC  }]
Dec 31 11:49:00.504: INFO: 
Dec 31 11:49:00.504: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-sd9fg
Dec 31 11:49:01.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:49:01.962: INFO: rc: 1
Dec 31 11:49:01.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0012f8ba0 exit status 1   true [0xc000412d38 0xc000412d88 0xc000412dc8] [0xc000412d38 0xc000412d88 0xc000412dc8] [0xc000412d80 0xc000412db0] [0x935700 0x935700] 0xc001ec25a0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 31 11:49:11.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:49:12.169: INFO: rc: 1
Dec 31 11:49:12.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ce0870 exit status 1   true [0xc0019a8138 0xc0019a8150 0xc0019a8168] [0xc0019a8138 0xc0019a8150 0xc0019a8168] [0xc0019a8148 0xc0019a8160] [0x935700 0x935700] 0xc0020b15c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:49:22.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:49:22.289: INFO: rc: 1
Dec 31 11:49:22.289: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ce09c0 exit status 1   true [0xc0019a8170 0xc0019a8188 0xc0019a81a0] [0xc0019a8170 0xc0019a8188 0xc0019a81a0] [0xc0019a8180 0xc0019a8198] [0x935700 0x935700] 0xc0020b1c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:49:32.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:49:32.909: INFO: rc: 1
Dec 31 11:49:32.909: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0012f8cf0 exit status 1   true [0xc000412dd0 0xc000412ee0 0xc000412f10] [0xc000412dd0 0xc000412ee0 0xc000412f10] [0xc000412ea8 0xc000412f08] [0x935700 0x935700] 0xc001ec2840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:49:42.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:49:43.137: INFO: rc: 1
Dec 31 11:49:43.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bfa3c0 exit status 1   true [0xc002118078 0xc002118090 0xc0021180a8] [0xc002118078 0xc002118090 0xc0021180a8] [0xc002118088 0xc0021180a0] [0x935700 0x935700] 0xc001d88180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:49:53.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:49:53.312: INFO: rc: 1
Dec 31 11:49:53.313: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ce0b10 exit status 1   true [0xc0019a81a8 0xc0019a81c0 0xc0019a81d8] [0xc0019a81a8 0xc0019a81c0 0xc0019a81d8] [0xc0019a81b8 0xc0019a81d0] [0x935700 0x935700] 0xc001946a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:50:03.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:50:03.493: INFO: rc: 1
Dec 31 11:50:03.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000cc48a0 exit status 1   true [0xc00000fb88 0xc00000fbd8 0xc00000fc00] [0xc00000fb88 0xc00000fbd8 0xc00000fc00] [0xc00000fbb8 0xc00000fbf8] [0x935700 0x935700] 0xc001e666c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:50:13.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:50:13.745: INFO: rc: 1
Dec 31 11:50:13.745: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000cc4ba0 exit status 1   true [0xc00000fc08 0xc00000fc68 0xc00000fcb0] [0xc00000fc08 0xc00000fc68 0xc00000fcb0] [0xc00000fc48 0xc00000fca8] [0x935700 0x935700] 0xc001e66960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:50:23.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:50:23.960: INFO: rc: 1
Dec 31 11:50:23.961: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d10120 exit status 1   true [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8010 0xc0019a8028] [0x935700 0x935700] 0xc0020b01e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:50:33.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:50:34.145: INFO: rc: 1
Dec 31 11:50:34.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dba120 exit status 1   true [0xc00000e2a8 0xc00000ec58 0xc00000ed78] [0xc00000e2a8 0xc00000ec58 0xc00000ed78] [0xc00000ec48 0xc00000ed10] [0x935700 0x935700] 0xc0020fcb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:50:44.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:50:44.267: INFO: rc: 1
Dec 31 11:50:44.268: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00193e120 exit status 1   true [0xc000412010 0xc0004120d8 0xc000412298] [0xc000412010 0xc0004120d8 0xc000412298] [0xc0004120b0 0xc000412260] [0x935700 0x935700] 0xc001dc6c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:50:54.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:50:54.443: INFO: rc: 1
Dec 31 11:50:54.443: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d10630 exit status 1   true [0xc0019a8038 0xc0019a8050 0xc0019a8068] [0xc0019a8038 0xc0019a8050 0xc0019a8068] [0xc0019a8048 0xc0019a8060] [0x935700 0x935700] 0xc0020b0480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:51:04.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:51:04.683: INFO: rc: 1
Dec 31 11:51:04.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d107e0 exit status 1   true [0xc0019a8070 0xc0019a8088 0xc0019a80a0] [0xc0019a8070 0xc0019a8088 0xc0019a80a0] [0xc0019a8080 0xc0019a8098] [0x935700 0x935700] 0xc0020b0720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:51:14.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:51:14.879: INFO: rc: 1
Dec 31 11:51:14.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d10960 exit status 1   true [0xc0019a80a8 0xc0019a80c0 0xc0019a80d8] [0xc0019a80a8 0xc0019a80c0 0xc0019a80d8] [0xc0019a80b8 0xc0019a80d0] [0x935700 0x935700] 0xc0020b09c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:51:24.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:51:25.052: INFO: rc: 1
Dec 31 11:51:25.052: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dba2d0 exit status 1   true [0xc00000ed98 0xc00000ee00 0xc00000ef88] [0xc00000ed98 0xc00000ee00 0xc00000ef88] [0xc00000edd0 0xc00000ef78] [0x935700 0x935700] 0xc0020fcde0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:51:35.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:51:35.240: INFO: rc: 1
Dec 31 11:51:35.241: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dba420 exit status 1   true [0xc00000eff0 0xc00000f0f0 0xc00000f1f8] [0xc00000eff0 0xc00000f0f0 0xc00000f1f8] [0xc00000f0e8 0xc00000f138] [0x935700 0x935700] 0xc0020fd080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:51:45.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:51:45.432: INFO: rc: 1
Dec 31 11:51:45.432: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d10ab0 exit status 1   true [0xc0019a80e0 0xc0019a80f8 0xc0019a8110] [0xc0019a80e0 0xc0019a80f8 0xc0019a8110] [0xc0019a80f0 0xc0019a8108] [0x935700 0x935700] 0xc0020b0c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:51:55.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:51:55.608: INFO: rc: 1
Dec 31 11:51:55.608: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dba570 exit status 1   true [0xc00000f258 0xc00000f3d0 0xc00000f4c0] [0xc00000f258 0xc00000f3d0 0xc00000f4c0] [0xc00000f310 0xc00000f410] [0x935700 0x935700] 0xc0020fd320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:52:05.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:52:05.734: INFO: rc: 1
Dec 31 11:52:05.734: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d10bd0 exit status 1   true [0xc0019a8118 0xc0019a8130 0xc0019a8148] [0xc0019a8118 0xc0019a8130 0xc0019a8148] [0xc0019a8128 0xc0019a8140] [0x935700 0x935700] 0xc0020b1680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:52:15.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:52:15.932: INFO: rc: 1
Dec 31 11:52:15.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d11320 exit status 1   true [0xc0019a8150 0xc0019a8168 0xc0019a8180] [0xc0019a8150 0xc0019a8168 0xc0019a8180] [0xc0019a8160 0xc0019a8178] [0x935700 0x935700] 0xc0020b1e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:52:25.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:52:26.174: INFO: rc: 1
Dec 31 11:52:26.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00238c120 exit status 1   true [0xc00000e2a8 0xc00000ec58 0xc00000ed78] [0xc00000e2a8 0xc00000ec58 0xc00000ed78] [0xc00000ec48 0xc00000ed10] [0x935700 0x935700] 0xc0020b01e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:52:36.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:52:36.344: INFO: rc: 1
Dec 31 11:52:36.345: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00193e150 exit status 1   true [0xc000412010 0xc0004120d8 0xc000412298] [0xc000412010 0xc0004120d8 0xc000412298] [0xc0004120b0 0xc000412260] [0x935700 0x935700] 0xc0020fcb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:52:46.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:52:46.551: INFO: rc: 1
Dec 31 11:52:46.551: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00193e2a0 exit status 1   true [0xc0004122a0 0xc000412378 0xc000412470] [0xc0004122a0 0xc000412378 0xc000412470] [0xc000412310 0xc000412458] [0x935700 0x935700] 0xc0020fcde0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:52:56.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:52:56.841: INFO: rc: 1
Dec 31 11:52:56.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d10150 exit status 1   true [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8010 0xc0019a8028] [0x935700 0x935700] 0xc001dc6c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:53:06.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:53:07.061: INFO: rc: 1
Dec 31 11:53:07.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00238c270 exit status 1   true [0xc00000ed98 0xc00000ee00 0xc00000ef88] [0xc00000ed98 0xc00000ee00 0xc00000ef88] [0xc00000edd0 0xc00000ef78] [0x935700 0x935700] 0xc0020b0480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:53:17.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:53:17.260: INFO: rc: 1
Dec 31 11:53:17.261: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00238c3c0 exit status 1   true [0xc00000eff0 0xc00000f0f0 0xc00000f1f8] [0xc00000eff0 0xc00000f0f0 0xc00000f1f8] [0xc00000f0e8 0xc00000f138] [0x935700 0x935700] 0xc0020b0720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:53:27.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:53:27.493: INFO: rc: 1
Dec 31 11:53:27.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00193e3c0 exit status 1   true [0xc0004124a8 0xc000412538 0xc000412630] [0xc0004124a8 0xc000412538 0xc000412630] [0xc000412508 0xc000412598] [0x935700 0x935700] 0xc0020fd080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:53:37.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:53:37.650: INFO: rc: 1
Dec 31 11:53:37.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00238c540 exit status 1   true [0xc00000f258 0xc00000f3d0 0xc00000f4c0] [0xc00000f258 0xc00000f3d0 0xc00000f4c0] [0xc00000f310 0xc00000f410] [0x935700 0x935700] 0xc0020b09c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:53:47.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:53:47.848: INFO: rc: 1
Dec 31 11:53:47.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dba150 exit status 1   true [0xc002118000 0xc002118018 0xc002118030] [0xc002118000 0xc002118018 0xc002118030] [0xc002118010 0xc002118028] [0x935700 0x935700] 0xc00254e1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:53:57.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:53:58.128: INFO: rc: 1
Dec 31 11:53:58.129: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dba2a0 exit status 1   true [0xc002118040 0xc002118058 0xc002118070] [0xc002118040 0xc002118058 0xc002118070] [0xc002118050 0xc002118068] [0x935700 0x935700] 0xc00254e480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 31 11:54:08.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sd9fg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 11:54:08.307: INFO: rc: 1
Dec 31 11:54:08.307: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 31 11:54:08.307: INFO: Scaling statefulset ss to 0
Dec 31 11:54:08.333: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 31 11:54:08.338: INFO: Deleting all statefulset in ns e2e-tests-statefulset-sd9fg
Dec 31 11:54:08.343: INFO: Scaling statefulset ss to 0
Dec 31 11:54:08.359: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 11:54:08.364: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:54:08.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-sd9fg" for this suite.
Dec 31 11:54:16.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:54:16.813: INFO: namespace: e2e-tests-statefulset-sd9fg, resource: bindings, ignored listing per whitelist
Dec 31 11:54:16.848: INFO: namespace e2e-tests-statefulset-sd9fg deletion completed in 8.418825871s

• [SLOW TEST:381.797 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:54:16.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-c555s
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 11:54:17.295: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 11:54:51.601: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-c555s PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 11:54:51.601: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 11:54:52.099: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:54:52.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-c555s" for this suite.
Dec 31 11:55:18.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:55:18.395: INFO: namespace: e2e-tests-pod-network-test-c555s, resource: bindings, ignored listing per whitelist
Dec 31 11:55:18.408: INFO: namespace e2e-tests-pod-network-test-c555s deletion completed in 26.281503347s

• [SLOW TEST:61.560 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:55:18.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 11:55:19.035: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:55:20.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-jkkrr" for this suite.
Dec 31 11:55:26.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:55:26.452: INFO: namespace: e2e-tests-custom-resource-definition-jkkrr, resource: bindings, ignored listing per whitelist
Dec 31 11:55:26.588: INFO: namespace e2e-tests-custom-resource-definition-jkkrr deletion completed in 6.334065771s

• [SLOW TEST:8.180 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:55:26.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 31 11:55:26.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5x9b2'
Dec 31 11:55:29.494: INFO: stderr: ""
Dec 31 11:55:29.494: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 31 11:55:30.523: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:30.523: INFO: Found 0 / 1
Dec 31 11:55:31.606: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:31.606: INFO: Found 0 / 1
Dec 31 11:55:32.529: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:32.529: INFO: Found 0 / 1
Dec 31 11:55:33.505: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:33.505: INFO: Found 0 / 1
Dec 31 11:55:34.533: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:34.533: INFO: Found 0 / 1
Dec 31 11:55:35.922: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:35.922: INFO: Found 0 / 1
Dec 31 11:55:36.518: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:36.518: INFO: Found 0 / 1
Dec 31 11:55:37.516: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:37.517: INFO: Found 0 / 1
Dec 31 11:55:38.529: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:38.529: INFO: Found 1 / 1
Dec 31 11:55:38.529: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 31 11:55:38.545: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:38.546: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 31 11:55:38.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jwz59 --namespace=e2e-tests-kubectl-5x9b2 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 31 11:55:38.766: INFO: stderr: ""
Dec 31 11:55:38.766: INFO: stdout: "pod/redis-master-jwz59 patched\n"
STEP: checking annotations
Dec 31 11:55:38.778: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 11:55:38.778: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:55:38.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5x9b2" for this suite.
Dec 31 11:56:04.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:56:05.019: INFO: namespace: e2e-tests-kubectl-5x9b2, resource: bindings, ignored listing per whitelist
Dec 31 11:56:05.030: INFO: namespace e2e-tests-kubectl-5x9b2 deletion completed in 26.242352808s

• [SLOW TEST:38.441 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
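The patch step above sends `{"metadata":{"annotations":{"x":"y"}}}` to the pod. For this simple annotation update the result matches JSON merge-patch semantics (RFC 7386: nested objects merge, `null` deletes a key), though `kubectl patch` defaults to a strategic merge patch for pods. A sketch of those merge semantics on plain dicts:

```python
# Minimal JSON merge-patch (RFC 7386) over Python dicts: nested objects
# merge recursively, a null value deletes the key, anything else replaces.
def json_merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch  # a non-object patch replaces the target outright
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null removes the key
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# Trimmed, hypothetical pod object; the log patches pod redis-master-jwz59.
pod = {"metadata": {"name": "redis-master-jwz59", "annotations": {}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # {'x': 'y'}
```

Note that the pod's other metadata (its `name`) survives the merge, which is why the test can patch annotations without respecifying the object.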
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:56:05.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 31 11:56:05.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 31 11:56:05.377: INFO: stderr: ""
Dec 31 11:56:05.377: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:56:05.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6mn49" for this suite.
Dec 31 11:56:11.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:56:11.578: INFO: namespace: e2e-tests-kubectl-6mn49, resource: bindings, ignored listing per whitelist
Dec 31 11:56:11.592: INFO: namespace e2e-tests-kubectl-6mn49 deletion completed in 6.206181285s

• [SLOW TEST:6.561 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
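The `cluster-info` stdout above is wrapped in ANSI color codes (`\x1b[0;32m ... \x1b[0m`), which is why the logged string looks noisy. A small sketch of recovering the plain text with a regex for SGR escape sequences:

```python
# Strip ANSI SGR color sequences (ESC [ ... m) from kubectl cluster-info
# output to recover the plain text shown to a terminal.
import re

ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

raw = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n")
plain = ANSI_SGR.sub("", raw)
print(plain.strip())  # Kubernetes master is running at https://172.24.4.212:6443
```

The conformance check itself only asserts that the master service line is present; the colors are incidental terminal formatting.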
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:56:11.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-8a0a41da-2bc4-11ea-a129-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-8a0a41a9-2bc4-11ea-a129-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 31 11:56:11.828: INFO: Waiting up to 5m0s for pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-pxb25" to be "success or failure"
Dec 31 11:56:11.852: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.670885ms
Dec 31 11:56:14.126: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298147214s
Dec 31 11:56:16.140: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312298227s
Dec 31 11:56:18.428: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599705915s
Dec 31 11:56:20.722: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894623676s
Dec 31 11:56:22.752: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.923912883s
STEP: Saw pod success
Dec 31 11:56:22.752: INFO: Pod "projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:56:22.760: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Dec 31 11:56:23.751: INFO: Waiting for pod projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005 to disappear
Dec 31 11:56:24.170: INFO: Pod projected-volume-8a0a4113-2bc4-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:56:24.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pxb25" for this suite.
Dec 31 11:56:30.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:56:30.786: INFO: namespace: e2e-tests-projected-pxb25, resource: bindings, ignored listing per whitelist
Dec 31 11:56:30.854: INFO: namespace e2e-tests-projected-pxb25 deletion completed in 6.656992543s

• [SLOW TEST:19.262 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
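The repeated `Phase="Pending" ... Elapsed:` lines above come from the framework polling the pod until it reaches a terminal phase or the 5m0s budget expires. A sketch of that wait loop; `get_phase` stands in for a GET against the API server, and the fake phase sequence below simulates Pending → Running → Succeeded:

```python
# Poll a pod's phase until it is terminal ("Succeeded"/"Failed") or a
# timeout elapses, mirroring the framework's "Waiting up to 5m0s" loop.
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=0.01):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)  # the real framework polls every ~2s
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence, as observed in the log above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
print(result)  # Succeeded
```

"Success" for these volume specs means the test container ran, wrote its assertion output, and exited 0, so the pod phase lands on `Succeeded`.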
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:56:30.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:56:31.140: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-h2g6h" to be "success or failure"
Dec 31 11:56:31.183: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.746387ms
Dec 31 11:56:33.410: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270213034s
Dec 31 11:56:35.425: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285259618s
Dec 31 11:56:37.560: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420223943s
Dec 31 11:56:39.657: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516907059s
Dec 31 11:56:42.524: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.383857081s
STEP: Saw pod success
Dec 31 11:56:42.524: INFO: Pod "downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:56:42.600: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:56:43.317: INFO: Waiting for pod downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005 to disappear
Dec 31 11:56:43.349: INFO: Pod downwardapi-volume-9591f86e-2bc4-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:56:43.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h2g6h" for this suite.
Dec 31 11:56:49.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:56:49.464: INFO: namespace: e2e-tests-downward-api-h2g6h, resource: bindings, ignored listing per whitelist
Dec 31 11:56:49.657: INFO: namespace e2e-tests-downward-api-h2g6h deletion completed in 6.296990036s

• [SLOW TEST:18.803 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:56:49.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 31 11:56:49.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:56:50.413: INFO: stderr: ""
Dec 31 11:56:50.413: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 11:56:50.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:56:50.723: INFO: stderr: ""
Dec 31 11:56:50.723: INFO: stdout: "update-demo-nautilus-4rzp6 update-demo-nautilus-8nrcn "
Dec 31 11:56:50.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rzp6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:56:50.976: INFO: stderr: ""
Dec 31 11:56:50.976: INFO: stdout: ""
Dec 31 11:56:50.976: INFO: update-demo-nautilus-4rzp6 is created but not running
Dec 31 11:56:55.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:56:56.234: INFO: stderr: ""
Dec 31 11:56:56.235: INFO: stdout: "update-demo-nautilus-4rzp6 update-demo-nautilus-8nrcn "
Dec 31 11:56:56.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rzp6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:56:56.402: INFO: stderr: ""
Dec 31 11:56:56.402: INFO: stdout: ""
Dec 31 11:56:56.402: INFO: update-demo-nautilus-4rzp6 is created but not running
Dec 31 11:57:01.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:01.574: INFO: stderr: ""
Dec 31 11:57:01.574: INFO: stdout: "update-demo-nautilus-4rzp6 update-demo-nautilus-8nrcn "
Dec 31 11:57:01.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rzp6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:01.727: INFO: stderr: ""
Dec 31 11:57:01.727: INFO: stdout: ""
Dec 31 11:57:01.727: INFO: update-demo-nautilus-4rzp6 is created but not running
Dec 31 11:57:06.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:06.936: INFO: stderr: ""
Dec 31 11:57:06.937: INFO: stdout: "update-demo-nautilus-4rzp6 update-demo-nautilus-8nrcn "
Dec 31 11:57:06.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rzp6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:07.143: INFO: stderr: ""
Dec 31 11:57:07.144: INFO: stdout: "true"
Dec 31 11:57:07.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rzp6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:07.253: INFO: stderr: ""
Dec 31 11:57:07.253: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 11:57:07.254: INFO: validating pod update-demo-nautilus-4rzp6
Dec 31 11:57:07.283: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 11:57:07.283: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 11:57:07.283: INFO: update-demo-nautilus-4rzp6 is verified up and running
Dec 31 11:57:07.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nrcn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:07.437: INFO: stderr: ""
Dec 31 11:57:07.438: INFO: stdout: "true"
Dec 31 11:57:07.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nrcn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:07.555: INFO: stderr: ""
Dec 31 11:57:07.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 11:57:07.555: INFO: validating pod update-demo-nautilus-8nrcn
Dec 31 11:57:07.566: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 11:57:07.566: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 11:57:07.567: INFO: update-demo-nautilus-8nrcn is verified up and running
STEP: using delete to clean up resources
Dec 31 11:57:07.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:07.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 11:57:07.727: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 31 11:57:07.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-z6mpm'
Dec 31 11:57:07.910: INFO: stderr: "No resources found.\n"
Dec 31 11:57:07.911: INFO: stdout: ""
Dec 31 11:57:07.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-z6mpm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 31 11:57:08.264: INFO: stderr: ""
Dec 31 11:57:08.264: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:57:08.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z6mpm" for this suite.
Dec 31 11:57:32.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:57:32.580: INFO: namespace: e2e-tests-kubectl-z6mpm, resource: bindings, ignored listing per whitelist
Dec 31 11:57:32.605: INFO: namespace e2e-tests-kubectl-z6mpm deletion completed in 24.315398746s

• [SLOW TEST:42.947 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
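The repeated `kubectl get pods ... -o template` calls above print `true` only when the container named `update-demo` has a `state.running` entry in its `containerStatuses`. The same check over a pod object, as a sketch; the status dict below is a hypothetical, trimmed example:

```python
# Python equivalent of the go-template used above:
# {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
def container_running(pod, name):
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

pod = {
    "status": {
        "containerStatuses": [
            {"name": "update-demo",
             "state": {"running": {"startedAt": "2019-12-31T11:57:05Z"}}},
        ]
    }
}
print(container_running(pod, "update-demo"))  # True
```

This is why the probe prints an empty string while the pod is still starting (no `running` key yet) and flips to `true` once the kubelet reports the container running.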
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:57:32.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 31 11:57:43.609: INFO: Successfully updated pod "annotationupdateba730036-2bc4-11ea-a129-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:57:45.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z9vgm" for this suite.
Dec 31 11:58:09.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:58:09.916: INFO: namespace: e2e-tests-projected-z9vgm, resource: bindings, ignored listing per whitelist
Dec 31 11:58:10.173: INFO: namespace e2e-tests-projected-z9vgm deletion completed in 24.414584603s

• [SLOW TEST:37.567 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:58:10.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 11:58:10.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-qm7jp" to be "success or failure"
Dec 31 11:58:10.481: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.36282ms
Dec 31 11:58:12.843: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401980153s
Dec 31 11:58:14.858: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416787884s
Dec 31 11:58:17.690: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.249147613s
Dec 31 11:58:19.716: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.275123438s
Dec 31 11:58:21.782: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.340834705s
Dec 31 11:58:24.369: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.927871494s
STEP: Saw pod success
Dec 31 11:58:24.369: INFO: Pod "downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 11:58:24.395: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 11:58:25.165: INFO: Waiting for pod downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005 to disappear
Dec 31 11:58:25.185: INFO: Pod downwardapi-volume-d0b72d6b-2bc4-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:58:25.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qm7jp" for this suite.
Dec 31 11:58:31.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:58:31.402: INFO: namespace: e2e-tests-projected-qm7jp, resource: bindings, ignored listing per whitelist
Dec 31 11:58:31.620: INFO: namespace e2e-tests-projected-qm7jp deletion completed in 6.414533468s

• [SLOW TEST:21.447 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:58:31.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-mn6l
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 11:58:32.317: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mn6l" in namespace "e2e-tests-subpath-j5kms" to be "success or failure"
Dec 31 11:58:32.353: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 35.649068ms
Dec 31 11:58:34.362: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045259387s
Dec 31 11:58:36.382: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064791004s
Dec 31 11:58:38.539: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221699857s
Dec 31 11:58:40.564: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24676337s
Dec 31 11:58:42.580: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.26311163s
Dec 31 11:58:44.642: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 12.324849502s
Dec 31 11:58:46.667: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.35028404s
Dec 31 11:58:48.708: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.391193161s
Dec 31 11:58:50.726: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 18.408546836s
Dec 31 11:58:52.744: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 20.427325551s
Dec 31 11:58:54.813: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 22.49597384s
Dec 31 11:58:56.831: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 24.513507428s
Dec 31 11:58:58.848: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 26.530699329s
Dec 31 11:59:00.868: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 28.550658557s
Dec 31 11:59:02.888: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 30.571157659s
Dec 31 11:59:04.896: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 32.578732145s
Dec 31 11:59:06.913: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 34.596423183s
Dec 31 11:59:08.930: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Running", Reason="", readiness=false. Elapsed: 36.613364746s
Dec 31 11:59:10.954: INFO: Pod "pod-subpath-test-downwardapi-mn6l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.637032215s
STEP: Saw pod success
Dec 31 11:59:10.954: INFO: Pod "pod-subpath-test-downwardapi-mn6l" satisfied condition "success or failure"
Dec 31 11:59:10.962: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-mn6l container test-container-subpath-downwardapi-mn6l: 
STEP: delete the pod
Dec 31 11:59:12.420: INFO: Waiting for pod pod-subpath-test-downwardapi-mn6l to disappear
Dec 31 11:59:12.438: INFO: Pod pod-subpath-test-downwardapi-mn6l no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mn6l
Dec 31 11:59:12.438: INFO: Deleting pod "pod-subpath-test-downwardapi-mn6l" in namespace "e2e-tests-subpath-j5kms"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 11:59:12.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-j5kms" for this suite.
Dec 31 11:59:18.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 11:59:18.718: INFO: namespace: e2e-tests-subpath-j5kms, resource: bindings, ignored listing per whitelist
Dec 31 11:59:18.778: INFO: namespace e2e-tests-subpath-j5kms deletion completed in 6.317425185s

• [SLOW TEST:47.158 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
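The long run of `Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mn6l" ... Elapsed: ...` lines above is the framework's generic poll loop: check the pod phase roughly every 2 seconds, log the elapsed time, and give up at the deadline. A minimal sketch of that pattern in Python — the names `wait_for_condition` and `pod_succeeded` are hypothetical illustrations, not framework identifiers:

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0, desc="condition"):
    """Poll check() until it returns True or timeout_s elapses.

    Mirrors the e2e log pattern above: each attempt notes elapsed time,
    and the caller treats a timeout (False) as a test failure.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            print(f"{desc} satisfied after {elapsed:.3f}s")
            return True
        if elapsed >= timeout_s:
            return False
        time.sleep(interval_s)

# Example: a condition that flips to True on the third poll,
# standing in for a pod reaching Phase="Succeeded".
state = {"calls": 0}
def pod_succeeded():
    state["calls"] += 1
    return state["calls"] >= 3
```

In the real framework the deadline is the "5m0s" in the log and the check inspects `pod.Status.Phase`; the sketch only captures the control flow.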
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 11:59:18.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ln5p6
Dec 31 11:59:29.019: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ln5p6
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 11:59:29.025: INFO: Initial restart count of pod liveness-exec is 0
Dec 31 12:00:24.329: INFO: Restart count of pod e2e-tests-container-probe-ln5p6/liveness-exec is now 1 (55.304485271s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:00:24.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ln5p6" for this suite.
Dec 31 12:00:30.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:00:30.669: INFO: namespace: e2e-tests-container-probe-ln5p6, resource: bindings, ignored listing per whitelist
Dec 31 12:00:30.714: INFO: namespace e2e-tests-container-probe-ln5p6 deletion completed in 6.322046433s

• [SLOW TEST:71.934 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:00:30.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-248dba70-2bc5-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:00:31.027: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-zpbd8" to be "success or failure"
Dec 31 12:00:31.043: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.58205ms
Dec 31 12:00:33.494: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467452503s
Dec 31 12:00:35.577: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.550693536s
Dec 31 12:00:37.680: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653251872s
Dec 31 12:00:39.768: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.741018636s
Dec 31 12:00:41.783: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.756672613s
Dec 31 12:00:43.919: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.892397355s
STEP: Saw pod success
Dec 31 12:00:43.919: INFO: Pod "pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:00:44.252: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 12:00:44.401: INFO: Waiting for pod pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005 to disappear
Dec 31 12:00:44.412: INFO: Pod pod-projected-secrets-248ef19b-2bc5-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:00:44.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zpbd8" for this suite.
Dec 31 12:00:50.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:00:50.664: INFO: namespace: e2e-tests-projected-zpbd8, resource: bindings, ignored listing per whitelist
Dec 31 12:00:50.731: INFO: namespace e2e-tests-projected-zpbd8 deletion completed in 6.298744173s

• [SLOW TEST:20.017 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:00:50.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 31 12:01:19.042: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:19.042: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:19.633: INFO: Exec stderr: ""
Dec 31 12:01:19.633: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:19.634: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:20.163: INFO: Exec stderr: ""
Dec 31 12:01:20.163: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:20.163: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:20.775: INFO: Exec stderr: ""
Dec 31 12:01:20.776: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:20.776: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:21.159: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 31 12:01:21.159: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:21.160: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:21.500: INFO: Exec stderr: ""
Dec 31 12:01:21.500: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:21.500: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:21.813: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 31 12:01:21.813: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:21.813: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:22.145: INFO: Exec stderr: ""
Dec 31 12:01:22.145: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:22.145: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:22.512: INFO: Exec stderr: ""
Dec 31 12:01:22.512: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:22.512: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:23.168: INFO: Exec stderr: ""
Dec 31 12:01:23.168: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6nkcr PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:01:23.168: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:01:23.505: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:01:23.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-6nkcr" for this suite.
Dec 31 12:02:19.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:02:19.890: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-6nkcr, resource: bindings, ignored listing per whitelist
Dec 31 12:02:19.920: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-6nkcr deletion completed in 56.401771051s

• [SLOW TEST:89.189 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
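Each `ExecWithOptions {Command:[cat /etc/hosts] ...}` entry in the block above runs a command inside a named container of a named pod, which is equivalent to a `kubectl exec` invocation against the same namespace, pod, and container. A sketch that only builds the corresponding argv (the helper name `exec_argv` is a hypothetical illustration, not part of the test framework):

```python
def exec_argv(namespace, pod, container, command):
    """Build the `kubectl exec` argv equivalent to one ExecWithOptions
    log entry; everything after `--` is the in-container command."""
    return ["kubectl", "exec", "-n", namespace, pod, "-c", container, "--", *command]

# First exec from the log: cat /etc/hosts in container busybox-1 of test-pod.
argv = exec_argv(
    "e2e-tests-e2e-kubelet-etc-hosts-6nkcr", "test-pod", "busybox-1",
    ["cat", "/etc/hosts"],
)
```

The test compares the kubelet-managed `/etc/hosts` against an `/etc/hosts-original` copy in each container, which is why every container is exec'd twice in the log.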
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:02:19.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-65a05a43-2bc5-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:02:20.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-7mvlk" to be "success or failure"
Dec 31 12:02:20.299: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 84.607832ms
Dec 31 12:02:23.361: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.146931296s
Dec 31 12:02:25.481: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.267083131s
Dec 31 12:02:27.500: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.286124197s
Dec 31 12:02:29.681: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.467043809s
Dec 31 12:02:31.746: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.532333862s
STEP: Saw pod success
Dec 31 12:02:31.747: INFO: Pod "pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:02:31.758: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 12:02:31.893: INFO: Waiting for pod pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005 to disappear
Dec 31 12:02:31.919: INFO: Pod pod-projected-secrets-65a0f88d-2bc5-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:02:31.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7mvlk" for this suite.
Dec 31 12:02:37.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:02:38.060: INFO: namespace: e2e-tests-projected-7mvlk, resource: bindings, ignored listing per whitelist
Dec 31 12:02:38.172: INFO: namespace e2e-tests-projected-7mvlk deletion completed in 6.24742785s

• [SLOW TEST:18.251 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:02:38.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:02:48.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-tsd9c" for this suite.
Dec 31 12:02:55.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:02:55.353: INFO: namespace: e2e-tests-emptydir-wrapper-tsd9c, resource: bindings, ignored listing per whitelist
Dec 31 12:02:55.362: INFO: namespace e2e-tests-emptydir-wrapper-tsd9c deletion completed in 6.369937213s

• [SLOW TEST:17.190 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:02:55.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005
Dec 31 12:02:55.533: INFO: Pod name my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005: Found 0 pods out of 1
Dec 31 12:03:00.557: INFO: Pod name my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005: Found 1 pods out of 1
Dec 31 12:03:00.557: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005" are running
Dec 31 12:03:04.608: INFO: Pod "my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005-h9g5l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 12:02:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 12:02:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 12:02:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 12:02:55 +0000 UTC Reason: Message:}])
Dec 31 12:03:04.609: INFO: Trying to dial the pod
Dec 31 12:03:09.659: INFO: Controller my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005: Got expected result from replica 1 [my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005-h9g5l]: "my-hostname-basic-7aa83f76-2bc5-11ea-a129-0242ac110005-h9g5l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:03:09.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-jwbxm" for this suite.
Dec 31 12:03:15.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:03:15.881: INFO: namespace: e2e-tests-replication-controller-jwbxm, resource: bindings, ignored listing per whitelist
Dec 31 12:03:15.909: INFO: namespace e2e-tests-replication-controller-jwbxm deletion completed in 6.243789083s

• [SLOW TEST:20.547 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:03:15.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 12:03:16.624: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"872cbd6d-2bc5-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0021e0ae2), BlockOwnerDeletion:(*bool)(0xc0021e0ae3)}}
Dec 31 12:03:16.658: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"87132ea6-2bc5-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0015238c2), BlockOwnerDeletion:(*bool)(0xc0015238c3)}}
Dec 31 12:03:16.900: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"87277d5e-2bc5-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0021e0cb2), BlockOwnerDeletion:(*bool)(0xc0021e0cb3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:03:22.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kjjn8" for this suite.
Dec 31 12:03:30.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:03:30.470: INFO: namespace: e2e-tests-gc-kjjn8, resource: bindings, ignored listing per whitelist
Dec 31 12:03:30.509: INFO: namespace e2e-tests-gc-kjjn8 deletion completed in 8.339206436s

• [SLOW TEST:14.599 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
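The three `OwnerReferences` lines above form a deliberate circle — pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 — and the spec asserts that namespace cleanup still completes instead of deadlocking on the cycle. A hedged sketch of detecting such a circle in a simple name-to-owner map (the real garbage collector works on UIDs over the full object graph; `has_owner_cycle` is an illustrative helper, not its algorithm):

```python
def has_owner_cycle(owners):
    """Return True if following single owner references from any object
    revisits an object, i.e. the ownership graph contains a cycle."""
    for start in owners:
        seen = set()
        node = start
        while node in owners:
            if node in seen:
                return True
            seen.add(node)
            node = owners[node]
    return False

# Ownership recorded in the log: pod1 -> pod3, pod2 -> pod1, pod3 -> pod2.
circle = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
chain = {"pod1": "pod3"}  # pod3 has no owner, so no cycle
```

Because deletion of each pod would otherwise block on its owner (`BlockOwnerDeletion` is set in the log), the collector must recognize that no member of the circle can be waited on and delete them anyway.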
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:03:30.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 31 12:03:30.721: INFO: Waiting up to 5m0s for pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-ldftt" to be "success or failure"
Dec 31 12:03:30.729: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116411ms
Dec 31 12:03:32.772: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051421184s
Dec 31 12:03:34.790: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068697601s
Dec 31 12:03:36.824: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103595173s
Dec 31 12:03:38.836: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114857311s
Dec 31 12:03:40.874: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153247067s
STEP: Saw pod success
Dec 31 12:03:40.874: INFO: Pod "downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:03:40.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 12:03:41.026: INFO: Waiting for pod downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005 to disappear
Dec 31 12:03:41.047: INFO: Pod downward-api-8fa572f4-2bc5-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:03:41.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ldftt" for this suite.
Dec 31 12:03:47.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:03:47.242: INFO: namespace: e2e-tests-downward-api-ldftt, resource: bindings, ignored listing per whitelist
Dec 31 12:03:47.258: INFO: namespace e2e-tests-downward-api-ldftt deletion completed in 6.202427308s

• [SLOW TEST:16.748 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:03:47.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:03:47.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-zpqxz" to be "success or failure"
Dec 31 12:03:47.485: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.094514ms
Dec 31 12:03:49.612: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161882752s
Dec 31 12:03:51.629: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178426092s
Dec 31 12:03:53.658: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2082765s
Dec 31 12:03:55.673: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222643477s
Dec 31 12:03:57.691: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.240684124s
STEP: Saw pod success
Dec 31 12:03:57.691: INFO: Pod "downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:03:57.697: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:03:58.939: INFO: Waiting for pod downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005 to disappear
Dec 31 12:03:59.372: INFO: Pod downwardapi-volume-99a357e0-2bc5-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:03:59.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zpqxz" for this suite.
Dec 31 12:04:05.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:04:05.582: INFO: namespace: e2e-tests-projected-zpqxz, resource: bindings, ignored listing per whitelist
Dec 31 12:04:05.621: INFO: namespace e2e-tests-projected-zpqxz deletion completed in 6.224752184s

• [SLOW TEST:18.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:04:05.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a4987a51-2bc5-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:04:05.862: INFO: Waiting up to 5m0s for pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-5m6sz" to be "success or failure"
Dec 31 12:04:05.885: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.020719ms
Dec 31 12:04:08.270: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407480357s
Dec 31 12:04:10.296: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433195616s
Dec 31 12:04:12.315: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452400613s
Dec 31 12:04:14.332: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.469893259s
Dec 31 12:04:16.360: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.497553781s
STEP: Saw pod success
Dec 31 12:04:16.360: INFO: Pod "pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:04:16.373: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 31 12:04:16.436: INFO: Waiting for pod pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005 to disappear
Dec 31 12:04:16.451: INFO: Pod pod-configmaps-a499a55b-2bc5-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:04:16.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5m6sz" for this suite.
Dec 31 12:04:24.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:04:24.569: INFO: namespace: e2e-tests-configmap-5m6sz, resource: bindings, ignored listing per whitelist
Dec 31 12:04:24.845: INFO: namespace e2e-tests-configmap-5m6sz deletion completed in 8.385749082s

• [SLOW TEST:19.225 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
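The "volume with mappings" wording in the ConfigMap spec refers to the volume's `items` list, which projects a chosen data key to a chosen relative path inside the mount instead of using the key name as the filename. A minimal sketch of that projection, with hypothetical key and path values (the real types are `v1.ConfigMapVolumeSource` and `v1.KeyToPath`; the structs below are trimmed stand-ins):

```go
package main

import "fmt"

// keyToPath mirrors the ConfigMap volume "items" mapping: a data key is
// projected into the volume at a chosen relative path (illustrative
// stand-in for v1.KeyToPath).
type keyToPath struct {
	Key  string
	Path string
}

// projectConfigMap returns the relative file paths and contents a
// kubelet-style mount would produce for the given data and mappings.
// Keys absent from the data are skipped (sketch, not the real kubelet).
func projectConfigMap(data map[string]string, items []keyToPath) map[string]string {
	files := make(map[string]string)
	for _, it := range items {
		if v, ok := data[it.Key]; ok {
			files[it.Path] = v
		}
	}
	return files
}

func main() {
	// Hypothetical key/path pair; the test then reads the mounted file
	// from the container and compares its contents.
	data := map[string]string{"data-2": "value-2"}
	items := []keyToPath{{Key: "data-2", Path: "path/to/data-2"}}
	fmt.Println(projectConfigMap(data, items))
}
```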
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:04:24.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 12:04:25.266: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 31 12:04:30.919: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 31 12:04:36.969: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 31 12:04:38.984: INFO: Creating deployment "test-rollover-deployment"
Dec 31 12:04:39.114: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 31 12:04:41.150: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 31 12:04:41.181: INFO: Ensure that both replica sets have 1 created replica
Dec 31 12:04:41.207: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 31 12:04:41.261: INFO: Updating deployment test-rollover-deployment
Dec 31 12:04:41.261: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 31 12:04:43.626: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 31 12:04:43.639: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 31 12:04:43.649: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:43.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:45.678: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:45.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:47.700: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:47.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:50.140: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:50.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:51.730: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:51.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:53.678: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:53.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:55.678: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:55.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:57.678: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:57.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:04:59.669: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:04:59.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:05:01.674: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:05:01.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:05:03.748: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 12:05:03.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713390679, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 12:05:05.675: INFO: 
Dec 31 12:05:05.675: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 31 12:05:05.695: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2qc5r,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2qc5r/deployments/test-rollover-deployment,UID:b85cd4ad-2bc5-11ea-a994-fa163e34d433,ResourceVersion:16680901,Generation:2,CreationTimestamp:2019-12-31 12:04:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-31 12:04:39 +0000 UTC 2019-12-31 12:04:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-31 12:05:04 +0000 UTC 2019-12-31 12:04:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 31 12:05:05.702: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2qc5r,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2qc5r/replicasets/test-rollover-deployment-5b8479fdb6,UID:b9b6cd6e-2bc5-11ea-a994-fa163e34d433,ResourceVersion:16680890,Generation:2,CreationTimestamp:2019-12-31 12:04:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b85cd4ad-2bc5-11ea-a994-fa163e34d433 0xc0024f32a7 0xc0024f32a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 31 12:05:05.702: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 31 12:05:05.703: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2qc5r,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2qc5r/replicasets/test-rollover-controller,UID:b02a6aef-2bc5-11ea-a994-fa163e34d433,ResourceVersion:16680899,Generation:2,CreationTimestamp:2019-12-31 12:04:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b85cd4ad-2bc5-11ea-a994-fa163e34d433 0xc0024f3117 0xc0024f3118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 12:05:05.703: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2qc5r,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2qc5r/replicasets/test-rollover-deployment-58494b7559,UID:b875dc88-2bc5-11ea-a994-fa163e34d433,ResourceVersion:16680856,Generation:2,CreationTimestamp:2019-12-31 12:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b85cd4ad-2bc5-11ea-a994-fa163e34d433 0xc0024f31d7 0xc0024f31d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 12:05:05.711: INFO: Pod "test-rollover-deployment-5b8479fdb6-bbcmz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-bbcmz,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2qc5r,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2qc5r/pods/test-rollover-deployment-5b8479fdb6-bbcmz,UID:ba119ba6-2bc5-11ea-a994-fa163e34d433,ResourceVersion:16680875,Generation:0,CreationTimestamp:2019-12-31 12:04:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 b9b6cd6e-2bc5-11ea-a994-fa163e34d433 0xc0024f3e47 0xc0024f3e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pvvfh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pvvfh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pvvfh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024f3eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024f3ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:04:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:04:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:04:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:04:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-31 12:04:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-31 12:04:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://479f5bdc27624504b56a6a495bbd1709a133d0c2a0100c0cf8f17b14b3ac8ff3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:05:05.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2qc5r" for this suite.
Dec 31 12:05:14.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:05:14.328: INFO: namespace: e2e-tests-deployment-2qc5r, resource: bindings, ignored listing per whitelist
Dec 31 12:05:14.341: INFO: namespace e2e-tests-deployment-2qc5r deletion completed in 8.619980502s

• [SLOW TEST:49.494 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
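A note on the pod dumps above: the secret volume source prints `DefaultMode:*420`. The API serializes file modes in decimal, and 420 is simply the familiar octal mode 0644 (`rw-r--r--`). A one-line check:

```python
# DefaultMode appears as decimal 420 in the API object dump above;
# that is octal 0644, the usual world-readable file mode.
decimal_mode = 420
assert decimal_mode == 0o644
print(oct(decimal_mode))  # 0o644
```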
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:05:14.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 31 12:05:15.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:15.689: INFO: stderr: ""
Dec 31 12:05:15.689: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 12:05:15.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:15.886: INFO: stderr: ""
Dec 31 12:05:15.886: INFO: stdout: "update-demo-nautilus-fbzd6 "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 31 12:05:20.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:21.103: INFO: stderr: ""
Dec 31 12:05:21.103: INFO: stdout: "update-demo-nautilus-fbzd6 update-demo-nautilus-qttjr "
Dec 31 12:05:21.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbzd6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:21.285: INFO: stderr: ""
Dec 31 12:05:21.285: INFO: stdout: ""
Dec 31 12:05:21.285: INFO: update-demo-nautilus-fbzd6 is created but not running
Dec 31 12:05:26.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:26.657: INFO: stderr: ""
Dec 31 12:05:26.657: INFO: stdout: "update-demo-nautilus-fbzd6 update-demo-nautilus-qttjr "
Dec 31 12:05:26.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbzd6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:27.014: INFO: stderr: ""
Dec 31 12:05:27.014: INFO: stdout: ""
Dec 31 12:05:27.014: INFO: update-demo-nautilus-fbzd6 is created but not running
Dec 31 12:05:32.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:33.958: INFO: stderr: ""
Dec 31 12:05:33.958: INFO: stdout: "update-demo-nautilus-fbzd6 update-demo-nautilus-qttjr "
Dec 31 12:05:33.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbzd6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:34.205: INFO: stderr: ""
Dec 31 12:05:34.205: INFO: stdout: "true"
Dec 31 12:05:34.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbzd6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:34.330: INFO: stderr: ""
Dec 31 12:05:34.330: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:05:34.330: INFO: validating pod update-demo-nautilus-fbzd6
Dec 31 12:05:34.342: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:05:34.342: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:05:34.342: INFO: update-demo-nautilus-fbzd6 is verified up and running
Dec 31 12:05:34.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qttjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:34.511: INFO: stderr: ""
Dec 31 12:05:34.511: INFO: stdout: "true"
Dec 31 12:05:34.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qttjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:05:34.640: INFO: stderr: ""
Dec 31 12:05:34.640: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:05:34.640: INFO: validating pod update-demo-nautilus-qttjr
Dec 31 12:05:34.649: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:05:34.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:05:34.649: INFO: update-demo-nautilus-qttjr is verified up and running
STEP: rolling-update to new replication controller
Dec 31 12:05:34.652: INFO: scanned /root for discovery docs: 
Dec 31 12:05:34.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:06:12.612: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 31 12:06:12.612: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 12:06:12.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:06:12.806: INFO: stderr: ""
Dec 31 12:06:12.807: INFO: stdout: "update-demo-kitten-fws9r update-demo-kitten-gbqq8 "
Dec 31 12:06:12.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fws9r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:06:12.973: INFO: stderr: ""
Dec 31 12:06:12.973: INFO: stdout: "true"
Dec 31 12:06:12.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fws9r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:06:13.210: INFO: stderr: ""
Dec 31 12:06:13.210: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 31 12:06:13.210: INFO: validating pod update-demo-kitten-fws9r
Dec 31 12:06:13.242: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 31 12:06:13.242: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 31 12:06:13.242: INFO: update-demo-kitten-fws9r is verified up and running
Dec 31 12:06:13.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gbqq8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:06:13.384: INFO: stderr: ""
Dec 31 12:06:13.384: INFO: stdout: "true"
Dec 31 12:06:13.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gbqq8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bm55k'
Dec 31 12:06:13.561: INFO: stderr: ""
Dec 31 12:06:13.561: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 31 12:06:13.561: INFO: validating pod update-demo-kitten-gbqq8
Dec 31 12:06:13.574: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 31 12:06:13.574: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 31 12:06:13.574: INFO: update-demo-kitten-gbqq8 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:06:13.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bm55k" for this suite.
Dec 31 12:06:41.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:06:41.736: INFO: namespace: e2e-tests-kubectl-bm55k, resource: bindings, ignored listing per whitelist
Dec 31 12:06:41.779: INFO: namespace e2e-tests-kubectl-bm55k deletion completed in 28.200441077s

• [SLOW TEST:87.437 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
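The repeated `kubectl get pods -o template` probes in the Update Demo test above declare a pod up only when its `update-demo` container reports a `running` state under `status.containerStatuses`. A minimal Python sketch of that same check, assuming a pod dict shaped like the API objects printed in this log (the sample pod below is hypothetical):

```python
def container_running(pod, container_name):
    """Mirror of the log's go-template check: true iff the named container
    appears in status.containerStatuses with a state.running entry."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# Hypothetical pod object shaped like the API output shown in the log.
pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2019-12-31T12:05:32Z"}}}]}}
print(container_running(pod, "update-demo"))  # True
```

This mirrors why the test printed "created but not running" twice before succeeding: until the kubelet reports `state.running`, the template expands to an empty string.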
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:06:41.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:06:42.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-d4kcd" for this suite.
Dec 31 12:06:48.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:06:48.230: INFO: namespace: e2e-tests-services-d4kcd, resource: bindings, ignored listing per whitelist
Dec 31 12:06:48.422: INFO: namespace e2e-tests-services-d4kcd deletion completed in 6.403070964s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.643 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:06:48.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-cfp7m/configmap-test-05b5a332-2bc6-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:06:48.780: INFO: Waiting up to 5m0s for pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-cfp7m" to be "success or failure"
Dec 31 12:06:48.799: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.902912ms
Dec 31 12:06:50.827: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046452591s
Dec 31 12:06:52.847: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066402047s
Dec 31 12:06:55.338: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557102743s
Dec 31 12:06:57.346: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566005623s
Dec 31 12:06:59.366: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.585873508s
STEP: Saw pod success
Dec 31 12:06:59.366: INFO: Pod "pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:06:59.376: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005 container env-test: 
STEP: delete the pod
Dec 31 12:07:00.843: INFO: Waiting for pod pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:07:00.883: INFO: Pod pod-configmaps-05b6ba98-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:07:00.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cfp7m" for this suite.
Dec 31 12:07:07.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:07:07.319: INFO: namespace: e2e-tests-configmap-cfp7m, resource: bindings, ignored listing per whitelist
Dec 31 12:07:07.328: INFO: namespace e2e-tests-configmap-cfp7m deletion completed in 6.241342747s

• [SLOW TEST:18.904 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
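The ConfigMap test above waits "up to 5m0s" for the pod, polling its phase every couple of seconds until it is terminal. A sketch of that wait loop in plain Python; `get_phase` here is a stand-in for the framework's API call, not the actual e2e helper:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll until the pod phase is terminal, like the framework's
    'success or failure' wait; returns the final phase or raises on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase within the timeout")

# Simulated sequence matching the log: several Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_condition(lambda: next(phases), interval=0.01))  # Succeeded
```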
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:07:07.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 31 12:07:07.630: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jn748,SelfLink:/api/v1/namespaces/e2e-tests-watch-jn748/configmaps/e2e-watch-test-label-changed,UID:10e557ee-2bc6-11ea-a994-fa163e34d433,ResourceVersion:16681238,Generation:0,CreationTimestamp:2019-12-31 12:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 12:07:07.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jn748,SelfLink:/api/v1/namespaces/e2e-tests-watch-jn748/configmaps/e2e-watch-test-label-changed,UID:10e557ee-2bc6-11ea-a994-fa163e34d433,ResourceVersion:16681239,Generation:0,CreationTimestamp:2019-12-31 12:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 31 12:07:07.631: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jn748,SelfLink:/api/v1/namespaces/e2e-tests-watch-jn748/configmaps/e2e-watch-test-label-changed,UID:10e557ee-2bc6-11ea-a994-fa163e34d433,ResourceVersion:16681240,Generation:0,CreationTimestamp:2019-12-31 12:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 31 12:07:17.741: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jn748,SelfLink:/api/v1/namespaces/e2e-tests-watch-jn748/configmaps/e2e-watch-test-label-changed,UID:10e557ee-2bc6-11ea-a994-fa163e34d433,ResourceVersion:16681254,Generation:0,CreationTimestamp:2019-12-31 12:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 12:07:17.741: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jn748,SelfLink:/api/v1/namespaces/e2e-tests-watch-jn748/configmaps/e2e-watch-test-label-changed,UID:10e557ee-2bc6-11ea-a994-fa163e34d433,ResourceVersion:16681255,Generation:0,CreationTimestamp:2019-12-31 12:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 31 12:07:17.741: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jn748,SelfLink:/api/v1/namespaces/e2e-tests-watch-jn748/configmaps/e2e-watch-test-label-changed,UID:10e557ee-2bc6-11ea-a994-fa163e34d433,ResourceVersion:16681256,Generation:0,CreationTimestamp:2019-12-31 12:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:07:17.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-jn748" for this suite.
Dec 31 12:07:23.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:07:24.113: INFO: namespace: e2e-tests-watch-jn748, resource: bindings, ignored listing per whitelist
Dec 31 12:07:24.115: INFO: namespace e2e-tests-watch-jn748 deletion completed in 6.36524336s

• [SLOW TEST:16.787 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
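The Watchers test above exercises label-selector watch semantics: when an object's label stops matching the selector, the watch delivers DELETED; when the label is restored, it delivers ADDED (carrying any mutations made while unmatched, hence `mutation: 2` in the second ADDED event); changes to an unmatched object produce no notification. A minimal simulation of that event translation (illustrative names only, not the client-go API):

```python
def selector_event(was_matching, is_matching, changed):
    """Translate one object update into the event a label-selector watch sees."""
    if was_matching and not is_matching:
        return "DELETED"   # object left the selector's view
    if not was_matching and is_matching:
        return "ADDED"     # object re-entered the view
    if was_matching and changed:
        return "MODIFIED"
    return None            # unmatched object: watcher is not notified

print(selector_event(True, False, True))   # DELETED (label changed away)
print(selector_event(False, False, True))  # None (modified while unmatched)
print(selector_event(False, True, True))   # ADDED (label restored)
```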
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:07:24.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:07:24.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-56nv6" to be "success or failure"
Dec 31 12:07:24.333: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.358521ms
Dec 31 12:07:26.742: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421056899s
Dec 31 12:07:28.778: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456982841s
Dec 31 12:07:31.687: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.365920409s
Dec 31 12:07:33.841: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.520248208s
Dec 31 12:07:35.992: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.670319808s
STEP: Saw pod success
Dec 31 12:07:35.992: INFO: Pod "downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:07:36.006: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:07:36.320: INFO: Waiting for pod downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:07:36.328: INFO: Pod downwardapi-volume-1ae769b8-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:07:36.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-56nv6" for this suite.
Dec 31 12:07:42.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:07:42.806: INFO: namespace: e2e-tests-projected-56nv6, resource: bindings, ignored listing per whitelist
Dec 31 12:07:42.843: INFO: namespace e2e-tests-projected-56nv6 deletion completed in 6.506780649s

• [SLOW TEST:18.728 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:07:42.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-260cfe44-2bc6-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:07:43.037: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-krcxc" to be "success or failure"
Dec 31 12:07:43.069: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.393598ms
Dec 31 12:07:45.084: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046539586s
Dec 31 12:07:47.111: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07354734s
Dec 31 12:07:49.557: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51975252s
Dec 31 12:07:52.099: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.061567256s
Dec 31 12:07:54.189: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.151396691s
STEP: Saw pod success
Dec 31 12:07:54.189: INFO: Pod "pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:07:54.206: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 12:07:54.772: INFO: Waiting for pod pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:07:54.838: INFO: Pod pod-projected-secrets-260e36c6-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:07:54.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-krcxc" for this suite.
Dec 31 12:08:01.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:08:01.398: INFO: namespace: e2e-tests-projected-krcxc, resource: bindings, ignored listing per whitelist
Dec 31 12:08:01.435: INFO: namespace e2e-tests-projected-krcxc deletion completed in 6.423139307s

• [SLOW TEST:18.591 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
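The spec above mounts a Secret through a projected volume and remaps a key to a new path. A minimal sketch of the kind of pod spec it exercises (image, key, and the name suffix are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox  # stand-in for the suite's own test image
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map  # test creates this with a generated suffix
          items:
          - key: data-1             # hypothetical key
            path: new-path-data-1   # the key-to-path "mapping" under test
```

The pod is expected to reach phase Succeeded once the container prints the mapped file, which is what the "success or failure" polling above waits for.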
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:08:01.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-312ad6c6-2bc6-11ea-a129-0242ac110005
STEP: Creating secret with name s-test-opt-upd-312ad788-2bc6-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-312ad6c6-2bc6-11ea-a129-0242ac110005
STEP: Updating secret s-test-opt-upd-312ad788-2bc6-11ea-a129-0242ac110005
STEP: Creating secret with name s-test-opt-create-312ad7ac-2bc6-11ea-a129-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:09:26.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7h7r8" for this suite.
Dec 31 12:09:50.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:09:50.830: INFO: namespace: e2e-tests-projected-7h7r8, resource: bindings, ignored listing per whitelist
Dec 31 12:09:50.872: INFO: namespace e2e-tests-projected-7h7r8 deletion completed in 24.313139161s

• [SLOW TEST:109.437 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
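This spec deletes one secret, updates another, and creates a third while the pod is running, then waits for the kubelet to reflect each change in the projected volume. The key ingredient is `optional: true`, which lets the pod start and keep running even when a referenced secret is absent. A hedged sketch of the volume definition (secret names shortened; the run uses generated suffixes):

```yaml
  volumes:
  - name: projected-secret-volumes
    projected:
      sources:
      - secret:
          name: s-test-opt-del     # deleted mid-test; tolerated because optional
          optional: true
      - secret:
          name: s-test-opt-upd     # updated mid-test
          optional: true
      - secret:
          name: s-test-opt-create  # created only after the pod starts
          optional: true
```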
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:09:50.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-725a599b-2bc6-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:09:51.177: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-pz5rt" to be "success or failure"
Dec 31 12:09:51.198: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.288557ms
Dec 31 12:09:53.562: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384347368s
Dec 31 12:09:55.572: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394715044s
Dec 31 12:09:58.173: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.995798711s
Dec 31 12:10:00.188: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.010382288s
Dec 31 12:10:02.209: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.031140278s
STEP: Saw pod success
Dec 31 12:10:02.209: INFO: Pod "pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:10:02.213: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 12:10:02.983: INFO: Waiting for pod pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:10:03.582: INFO: Pod pod-projected-configmaps-726e698a-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:10:03.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pz5rt" for this suite.
Dec 31 12:10:09.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:10:09.714: INFO: namespace: e2e-tests-projected-pz5rt, resource: bindings, ignored listing per whitelist
Dec 31 12:10:09.937: INFO: namespace e2e-tests-projected-pz5rt deletion completed in 6.340655203s

• [SLOW TEST:19.065 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
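This is the ConfigMap analogue of the projected-secret specs above: a ConfigMap is projected into a volume and read back from the container. An illustrative spec fragment (image, key, and names are assumptions):

```yaml
  containers:
  - name: projected-configmap-volume-test
    image: busybox  # stand-in for the suite's test image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume  # generated suffix omitted
```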
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:10:09.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 31 12:10:10.400: INFO: Number of nodes with available pods: 0
Dec 31 12:10:10.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:12.128: INFO: Number of nodes with available pods: 0
Dec 31 12:10:12.128: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:12.660: INFO: Number of nodes with available pods: 0
Dec 31 12:10:12.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:13.414: INFO: Number of nodes with available pods: 0
Dec 31 12:10:13.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:14.422: INFO: Number of nodes with available pods: 0
Dec 31 12:10:14.422: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:16.027: INFO: Number of nodes with available pods: 0
Dec 31 12:10:16.027: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:16.495: INFO: Number of nodes with available pods: 0
Dec 31 12:10:16.495: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:17.431: INFO: Number of nodes with available pods: 0
Dec 31 12:10:17.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:18.425: INFO: Number of nodes with available pods: 0
Dec 31 12:10:18.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:19.417: INFO: Number of nodes with available pods: 1
Dec 31 12:10:19.417: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 31 12:10:19.468: INFO: Number of nodes with available pods: 0
Dec 31 12:10:19.468: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:20.500: INFO: Number of nodes with available pods: 0
Dec 31 12:10:20.500: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:21.515: INFO: Number of nodes with available pods: 0
Dec 31 12:10:21.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:22.539: INFO: Number of nodes with available pods: 0
Dec 31 12:10:22.539: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:23.508: INFO: Number of nodes with available pods: 0
Dec 31 12:10:23.508: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:24.987: INFO: Number of nodes with available pods: 0
Dec 31 12:10:24.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:25.501: INFO: Number of nodes with available pods: 0
Dec 31 12:10:25.501: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:26.494: INFO: Number of nodes with available pods: 0
Dec 31 12:10:26.494: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:27.496: INFO: Number of nodes with available pods: 0
Dec 31 12:10:27.496: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:28.523: INFO: Number of nodes with available pods: 0
Dec 31 12:10:28.523: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:29.489: INFO: Number of nodes with available pods: 0
Dec 31 12:10:29.489: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:30.503: INFO: Number of nodes with available pods: 0
Dec 31 12:10:30.503: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:31.491: INFO: Number of nodes with available pods: 0
Dec 31 12:10:31.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:32.523: INFO: Number of nodes with available pods: 0
Dec 31 12:10:32.523: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:33.500: INFO: Number of nodes with available pods: 0
Dec 31 12:10:33.500: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:34.851: INFO: Number of nodes with available pods: 0
Dec 31 12:10:34.851: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:35.660: INFO: Number of nodes with available pods: 0
Dec 31 12:10:35.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:36.526: INFO: Number of nodes with available pods: 0
Dec 31 12:10:36.526: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:37.481: INFO: Number of nodes with available pods: 0
Dec 31 12:10:37.481: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:39.765: INFO: Number of nodes with available pods: 0
Dec 31 12:10:39.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:40.680: INFO: Number of nodes with available pods: 0
Dec 31 12:10:40.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:41.497: INFO: Number of nodes with available pods: 0
Dec 31 12:10:41.497: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:42.499: INFO: Number of nodes with available pods: 0
Dec 31 12:10:42.499: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:10:43.495: INFO: Number of nodes with available pods: 1
Dec 31 12:10:43.495: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-qnvnk, will wait for the garbage collector to delete the pods
Dec 31 12:10:43.589: INFO: Deleting DaemonSet.extensions daemon-set took: 25.160977ms
Dec 31 12:10:43.790: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.631181ms
Dec 31 12:10:51.202: INFO: Number of nodes with available pods: 0
Dec 31 12:10:51.202: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 12:10:51.213: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qnvnk/daemonsets","resourceVersion":"16681670"},"items":null}

Dec 31 12:10:51.221: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qnvnk/pods","resourceVersion":"16681670"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:10:51.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-qnvnk" for this suite.
Dec 31 12:10:59.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:10:59.442: INFO: namespace: e2e-tests-daemonsets-qnvnk, resource: bindings, ignored listing per whitelist
Dec 31 12:10:59.531: INFO: namespace e2e-tests-daemonsets-qnvnk deletion completed in 8.279570066s

• [SLOW TEST:49.593 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
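The "simple daemon" spec creates a DaemonSet, waits until an available pod is reported on every schedulable node (one node in this single-node run), deletes a daemon pod, and checks that the controller revives it. A minimal DaemonSet along these lines (labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1  # any always-running image works here
```

The repeated "Number of nodes with available pods: 0" lines above are the test's poll loop waiting for per-node availability to converge.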
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:10:59.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 31 12:10:59.814: INFO: Waiting up to 5m0s for pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-var-expansion-rdkg8" to be "success or failure"
Dec 31 12:10:59.845: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.206546ms
Dec 31 12:11:01.864: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049686783s
Dec 31 12:11:03.890: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0760805s
Dec 31 12:11:06.264: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449640236s
Dec 31 12:11:08.277: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463046317s
Dec 31 12:11:10.294: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.480109097s
STEP: Saw pod success
Dec 31 12:11:10.294: INFO: Pod "var-expansion-9b413153-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:11:10.307: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-9b413153-2bc6-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 12:11:10.392: INFO: Waiting for pod var-expansion-9b413153-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:11:10.399: INFO: Pod var-expansion-9b413153-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:11:10.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-rdkg8" for this suite.
Dec 31 12:11:16.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:11:16.643: INFO: namespace: e2e-tests-var-expansion-rdkg8, resource: bindings, ignored listing per whitelist
Dec 31 12:11:16.656: INFO: namespace e2e-tests-var-expansion-rdkg8 deletion completed in 6.244013631s

• [SLOW TEST:17.124 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
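Variable expansion composes one env var from others using the `$(VAR)` syntax; the kubelet substitutes previously declared variables when starting the container. A sketch of the container env this spec exercises (names and values are illustrative):

```yaml
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"  # expands to "foo-value;;bar-value"
```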
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:11:16.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1231 12:11:19.060452       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 12:11:19.060: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:11:19.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m4jcf" for this suite.
Dec 31 12:11:29.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:11:29.740: INFO: namespace: e2e-tests-gc-m4jcf, resource: bindings, ignored listing per whitelist
Dec 31 12:11:29.912: INFO: namespace e2e-tests-gc-m4jcf deletion completed in 10.848934206s

• [SLOW TEST:13.256 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
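Deleting a Deployment "when not orphaning" means cascading deletion: the garbage collector removes the dependent ReplicaSet and its pods after the owner is gone, which is why the poll above briefly logs "expected 0 rs, got 1 rs" before converging. Roughly the DeleteOptions body involved (a sketch, not the test's literal request):

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Background  # owner deleted first; GC then removes dependents
```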
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:11:29.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ad785e69-2bc6-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:11:30.251: INFO: Waiting up to 5m0s for pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-cv6tb" to be "success or failure"
Dec 31 12:11:30.269: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.127857ms
Dec 31 12:11:32.400: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148730116s
Dec 31 12:11:34.410: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15903993s
Dec 31 12:11:36.798: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.546648284s
Dec 31 12:11:38.813: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56160898s
Dec 31 12:11:40.827: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.575555615s
STEP: Saw pod success
Dec 31 12:11:40.827: INFO: Pod "pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:11:40.832: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005 container secret-env-test: 
STEP: delete the pod
Dec 31 12:11:42.001: INFO: Waiting for pod pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:11:42.014: INFO: Pod pod-secrets-ad7c55b4-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:11:42.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cv6tb" for this suite.
Dec 31 12:11:48.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:11:48.258: INFO: namespace: e2e-tests-secrets-cv6tb, resource: bindings, ignored listing per whitelist
Dec 31 12:11:48.269: INFO: namespace e2e-tests-secrets-cv6tb deletion completed in 6.224228695s

• [SLOW TEST:18.357 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
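Here the secret is consumed through an environment variable rather than a volume, via `secretKeyRef`. An illustrative container fragment (image, env var name, and key are assumptions):

```yaml
  containers:
  - name: secret-env-test
    image: busybox  # stand-in for the suite's test image
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test  # generated suffix omitted
          key: data-1
```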
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:11:48.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:11:48.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-7qkbl" to be "success or failure"
Dec 31 12:11:48.767: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.880452ms
Dec 31 12:11:50.780: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030092196s
Dec 31 12:11:52.797: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046704653s
Dec 31 12:11:55.041: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.290418154s
Dec 31 12:11:57.075: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324495551s
Dec 31 12:11:59.091: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.340728121s
STEP: Saw pod success
Dec 31 12:11:59.091: INFO: Pod "downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:11:59.096: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:11:59.327: INFO: Waiting for pod downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005 to disappear
Dec 31 12:11:59.347: INFO: Pod downwardapi-volume-b86bc5a8-2bc6-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:11:59.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7qkbl" for this suite.
Dec 31 12:12:07.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:12:07.575: INFO: namespace: e2e-tests-downward-api-7qkbl, resource: bindings, ignored listing per whitelist
Dec 31 12:12:07.595: INFO: namespace e2e-tests-downward-api-7qkbl deletion completed in 8.241966242s

• [SLOW TEST:19.326 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
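The Downward API spec mounts container resource information as a file. Because the container declares no CPU limit, the reported value falls back to the node's allocatable CPU, which is exactly the behavior under test. A hedged sketch of the volume definition:

```yaml
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu  # with no limit set, node allocatable is reported
```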
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:12:07.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 31 12:12:16.393: INFO: 10 pods remaining
Dec 31 12:12:16.394: INFO: 10 pods have nil DeletionTimestamp
Dec 31 12:12:16.394: INFO: 
Dec 31 12:12:18.463: INFO: 6 pods remaining
Dec 31 12:12:18.463: INFO: 3 pods have nil DeletionTimestamp
Dec 31 12:12:18.463: INFO: 
Dec 31 12:12:18.784: INFO: 0 pods remaining
Dec 31 12:12:18.784: INFO: 0 pods have nil DeletionTimestamp
Dec 31 12:12:18.784: INFO: 
STEP: Gathering metrics
W1231 12:12:19.765983       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 12:12:19.766: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:12:19.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fchss" for this suite.
Dec 31 12:12:31.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:12:31.937: INFO: namespace: e2e-tests-gc-fchss, resource: bindings, ignored listing per whitelist
Dec 31 12:12:32.108: INFO: namespace e2e-tests-gc-fchss deletion completed in 12.32941728s

• [SLOW TEST:24.512 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
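The run above exercises foreground cascading deletion: because the delete request asks for it, the RC is kept around (with a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has removed every pod it owns, which is why the log counts pods down from 10 to 0 before the RC disappears. A minimal sketch of the DeleteOptions body that requests this behaviour (the RC name and client are not shown; this is the API request body only):

```yaml
# DeleteOptions sent with the DELETE request for the RC.
# propagationPolicy: Foreground blocks deletion of the owner (the RC)
# until all dependents (its pods) are gone -- the behaviour the test
# waits for in the "wait for the rc to be deleted" step above.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```

With `Background` instead, the RC would be deleted immediately and the pods cleaned up afterwards; with `Orphan`, the pods would be left running.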
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:12:32.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 12:12:32.350: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 31 12:12:32.430: INFO: Number of nodes with available pods: 0
Dec 31 12:12:32.430: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 31 12:12:32.519: INFO: Number of nodes with available pods: 0
Dec 31 12:12:32.519: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:33.541: INFO: Number of nodes with available pods: 0
Dec 31 12:12:33.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:34.562: INFO: Number of nodes with available pods: 0
Dec 31 12:12:34.562: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:35.904: INFO: Number of nodes with available pods: 0
Dec 31 12:12:35.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:36.546: INFO: Number of nodes with available pods: 0
Dec 31 12:12:36.546: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:37.539: INFO: Number of nodes with available pods: 0
Dec 31 12:12:37.539: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:38.993: INFO: Number of nodes with available pods: 0
Dec 31 12:12:38.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:39.728: INFO: Number of nodes with available pods: 0
Dec 31 12:12:39.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:40.548: INFO: Number of nodes with available pods: 0
Dec 31 12:12:40.548: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:41.536: INFO: Number of nodes with available pods: 0
Dec 31 12:12:41.536: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:42.586: INFO: Number of nodes with available pods: 1
Dec 31 12:12:42.586: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 31 12:12:42.828: INFO: Number of nodes with available pods: 1
Dec 31 12:12:42.828: INFO: Number of running nodes: 0, number of available pods: 1
Dec 31 12:12:43.857: INFO: Number of nodes with available pods: 0
Dec 31 12:12:43.857: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 31 12:12:43.902: INFO: Number of nodes with available pods: 0
Dec 31 12:12:43.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:44.918: INFO: Number of nodes with available pods: 0
Dec 31 12:12:44.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:46.555: INFO: Number of nodes with available pods: 0
Dec 31 12:12:46.555: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:46.918: INFO: Number of nodes with available pods: 0
Dec 31 12:12:46.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:48.299: INFO: Number of nodes with available pods: 0
Dec 31 12:12:48.299: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:48.918: INFO: Number of nodes with available pods: 0
Dec 31 12:12:48.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:49.910: INFO: Number of nodes with available pods: 0
Dec 31 12:12:49.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:50.933: INFO: Number of nodes with available pods: 0
Dec 31 12:12:50.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:52.453: INFO: Number of nodes with available pods: 0
Dec 31 12:12:52.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:53.024: INFO: Number of nodes with available pods: 0
Dec 31 12:12:53.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:53.925: INFO: Number of nodes with available pods: 0
Dec 31 12:12:53.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:54.977: INFO: Number of nodes with available pods: 0
Dec 31 12:12:54.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:56.354: INFO: Number of nodes with available pods: 0
Dec 31 12:12:56.354: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:57.042: INFO: Number of nodes with available pods: 0
Dec 31 12:12:57.042: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:58.098: INFO: Number of nodes with available pods: 0
Dec 31 12:12:58.098: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:58.924: INFO: Number of nodes with available pods: 0
Dec 31 12:12:58.924: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 31 12:12:59.918: INFO: Number of nodes with available pods: 1
Dec 31 12:12:59.919: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9hqdw, will wait for the garbage collector to delete the pods
Dec 31 12:13:00.001: INFO: Deleting DaemonSet.extensions daemon-set took: 16.560894ms
Dec 31 12:13:00.101: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.460919ms
Dec 31 12:13:06.763: INFO: Number of nodes with available pods: 0
Dec 31 12:13:06.763: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 12:13:06.807: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9hqdw/daemonsets","resourceVersion":"16682109"},"items":null}

Dec 31 12:13:06.828: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9hqdw/pods","resourceVersion":"16682110"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:13:06.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9hqdw" for this suite.
Dec 31 12:13:15.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:13:15.112: INFO: namespace: e2e-tests-daemonsets-9hqdw, resource: bindings, ignored listing per whitelist
Dec 31 12:13:15.229: INFO: namespace e2e-tests-daemonsets-9hqdw deletion completed in 8.270091915s

• [SLOW TEST:43.121 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
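The DaemonSet test above drives scheduling purely through a pod-template node selector: no pods run until a node carries the matching label ("blue"), the pod is evicted when the label changes to "green", and it returns once the DaemonSet's selector is updated to match. A sketch of such a DaemonSet, assuming a hypothetical label key `color` (the run does not show the actual key) and using the `daemon-set` name and RollingUpdate strategy from the log:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # name used by the test; the rest is illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue           # hypothetical key; the test relabels the node blue -> green
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

Relabeling the node (or editing `nodeSelector`) is enough to make the DaemonSet controller start or stop the daemon pod, exactly as the "Number of running nodes" lines record.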
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:13:15.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-bm88
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 12:13:15.552: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bm88" in namespace "e2e-tests-subpath-5dfnh" to be "success or failure"
Dec 31 12:13:15.591: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 38.947544ms
Dec 31 12:13:18.010: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457209402s
Dec 31 12:13:20.042: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489776435s
Dec 31 12:13:22.115: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562198122s
Dec 31 12:13:24.143: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590926625s
Dec 31 12:13:26.245: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.692105511s
Dec 31 12:13:28.260: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 12.707436797s
Dec 31 12:13:30.317: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Pending", Reason="", readiness=false. Elapsed: 14.764636596s
Dec 31 12:13:32.370: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 16.817088335s
Dec 31 12:13:34.395: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 18.842751736s
Dec 31 12:13:36.414: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 20.861917572s
Dec 31 12:13:38.437: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 22.884629152s
Dec 31 12:13:40.454: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 24.90158135s
Dec 31 12:13:42.479: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 26.92690318s
Dec 31 12:13:44.514: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 28.961271387s
Dec 31 12:13:46.545: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 30.993053088s
Dec 31 12:13:48.587: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 33.034928293s
Dec 31 12:13:50.618: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Running", Reason="", readiness=false. Elapsed: 35.065948537s
Dec 31 12:13:52.672: INFO: Pod "pod-subpath-test-secret-bm88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.119893375s
STEP: Saw pod success
Dec 31 12:13:52.672: INFO: Pod "pod-subpath-test-secret-bm88" satisfied condition "success or failure"
Dec 31 12:13:52.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-bm88 container test-container-subpath-secret-bm88: 
STEP: delete the pod
Dec 31 12:13:53.199: INFO: Waiting for pod pod-subpath-test-secret-bm88 to disappear
Dec 31 12:13:53.248: INFO: Pod pod-subpath-test-secret-bm88 no longer exists
STEP: Deleting pod pod-subpath-test-secret-bm88
Dec 31 12:13:53.248: INFO: Deleting pod "pod-subpath-test-secret-bm88" in namespace "e2e-tests-subpath-5dfnh"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:13:53.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-5dfnh" for this suite.
Dec 31 12:14:01.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:14:01.490: INFO: namespace: e2e-tests-subpath-5dfnh, resource: bindings, ignored listing per whitelist
Dec 31 12:14:01.541: INFO: namespace e2e-tests-subpath-5dfnh deletion completed in 8.273499579s

• [SLOW TEST:46.312 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:14:01.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 12:14:01.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-96kpg'
Dec 31 12:14:01.902: INFO: stderr: ""
Dec 31 12:14:01.902: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 31 12:14:01.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-96kpg'
Dec 31 12:14:03.718: INFO: stderr: ""
Dec 31 12:14:03.718: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:14:03.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-96kpg" for this suite.
Dec 31 12:14:09.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:14:10.242: INFO: namespace: e2e-tests-kubectl-96kpg, resource: bindings, ignored listing per whitelist
Dec 31 12:14:10.301: INFO: namespace e2e-tests-kubectl-96kpg deletion completed in 6.403021341s

• [SLOW TEST:8.760 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
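The `kubectl run` invocation above uses the `run-pod/v1` generator, which creates a bare Pod rather than a Deployment. Roughly the manifest it generates (the container name follows the generator's convention of reusing the pod name, which may differ across versions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
spec:
  restartPolicy: Never            # what --restart=Never sets: the kubelet
                                  # will not restart the container on exit
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```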
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:14:10.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 31 12:14:10.484: INFO: PodSpec: initContainers in spec.initContainers
Dec 31 12:15:18.798: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0d0082ba-2bc7-11ea-a129-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-zdt6r", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-zdt6r/pods/pod-init-0d0082ba-2bc7-11ea-a129-0242ac110005", UID:"0d062044-2bc7-11ea-a994-fa163e34d433", ResourceVersion:"16682374", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713391250, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"484516169"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4sxj9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020a0000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4sxj9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4sxj9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4sxj9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001db4238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a06120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001db45d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001db45f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001db45f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001db45fc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713391250, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713391250, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713391250, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713391250, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0010113c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001292e70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001292ee0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a54dd706e843825d1aab4c303219a58de1f518e8f3d3c38cbb3044950310391e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00125e180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00125e0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:15:18.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zdt6r" for this suite.
Dec 31 12:15:42.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:15:43.028: INFO: namespace: e2e-tests-init-container-zdt6r, resource: bindings, ignored listing per whitelist
Dec 31 12:15:43.198: INFO: namespace e2e-tests-init-container-zdt6r deletion completed in 24.251337249s

• [SLOW TEST:92.895 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
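The init-container test above is easiest to read from the pod dump it prints: `init1` runs `/bin/false` and therefore always fails, `init2` (`/bin/true`) stays Waiting, and the app container `run1` is never started, while `restartPolicy: Always` makes the kubelet retry `init1` with backoff (the dump shows `RestartCount:3`). A sketch of that pod, reconstructed from the fields visible in the dump:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example          # the run uses a generated name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```

Init containers run strictly in order, so one persistently failing init container pins the pod in `Pending` with reason `ContainersNotInitialized`, exactly as the dumped pod conditions show.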
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:15:43.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 31 12:15:43.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:45.953: INFO: stderr: ""
Dec 31 12:15:45.953: INFO: stdout: "pod/pause created\n"
Dec 31 12:15:45.953: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 31 12:15:45.953: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tdvfr" to be "running and ready"
Dec 31 12:15:46.053: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 99.686008ms
Dec 31 12:15:48.414: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460272344s
Dec 31 12:15:50.461: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507284553s
Dec 31 12:15:52.609: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65567172s
Dec 31 12:15:54.630: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.676095762s
Dec 31 12:15:56.645: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.691722431s
Dec 31 12:15:56.645: INFO: Pod "pause" satisfied condition "running and ready"
Dec 31 12:15:56.645: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 31 12:15:56.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:56.917: INFO: stderr: ""
Dec 31 12:15:56.917: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 31 12:15:56.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:57.082: INFO: stderr: ""
Dec 31 12:15:57.082: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 31 12:15:57.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:57.263: INFO: stderr: ""
Dec 31 12:15:57.263: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 31 12:15:57.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:57.375: INFO: stderr: ""
Dec 31 12:15:57.375: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 31 12:15:57.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:57.541: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 12:15:57.541: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 31 12:15:57.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tdvfr'
Dec 31 12:15:57.949: INFO: stderr: "No resources found.\n"
Dec 31 12:15:57.949: INFO: stdout: ""
Dec 31 12:15:57.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tdvfr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 31 12:15:58.114: INFO: stderr: ""
Dec 31 12:15:58.114: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:15:58.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tdvfr" for this suite.
Dec 31 12:16:04.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:16:05.014: INFO: namespace: e2e-tests-kubectl-tdvfr, resource: bindings, ignored listing per whitelist
Dec 31 12:16:05.130: INFO: namespace e2e-tests-kubectl-tdvfr deletion completed in 6.625586695s

• [SLOW TEST:21.932 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:16:05.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 31 12:16:05.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 31 12:16:05.605: INFO: stderr: ""
Dec 31 12:16:05.605: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:16:05.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-566v7" for this suite.
Dec 31 12:16:11.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:16:11.747: INFO: namespace: e2e-tests-kubectl-566v7, resource: bindings, ignored listing per whitelist
Dec 31 12:16:11.811: INFO: namespace e2e-tests-kubectl-566v7 deletion completed in 6.191796973s

• [SLOW TEST:6.680 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:16:11.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1231 12:16:27.186330       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 12:16:27.186: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:16:27.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-gk5j9" for this suite.
Dec 31 12:16:50.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:16:50.043: INFO: namespace: e2e-tests-gc-gk5j9, resource: bindings, ignored listing per whitelist
Dec 31 12:16:50.132: INFO: namespace e2e-tests-gc-gk5j9 deletion completed in 22.933404016s

• [SLOW TEST:38.321 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:16:50.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:16:50.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6xz9w" for this suite.
Dec 31 12:16:56.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:16:56.856: INFO: namespace: e2e-tests-kubelet-test-6xz9w, resource: bindings, ignored listing per whitelist
Dec 31 12:16:56.872: INFO: namespace e2e-tests-kubelet-test-6xz9w deletion completed in 6.279225032s

• [SLOW TEST:6.740 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:16:56.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1231 12:17:07.190208       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 12:17:07.190: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:17:07.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-68j9d" for this suite.
Dec 31 12:17:13.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:17:13.507: INFO: namespace: e2e-tests-gc-68j9d, resource: bindings, ignored listing per whitelist
Dec 31 12:17:13.788: INFO: namespace e2e-tests-gc-68j9d deletion completed in 6.587429825s

• [SLOW TEST:16.916 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:17:13.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 31 12:17:14.100: INFO: Waiting up to 5m0s for pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-xbtvv" to be "success or failure"
Dec 31 12:17:14.226: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 126.052198ms
Dec 31 12:17:16.241: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140383565s
Dec 31 12:17:18.282: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182066388s
Dec 31 12:17:20.302: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201611231s
Dec 31 12:17:22.784: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68363223s
Dec 31 12:17:24.802: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.701574865s
Dec 31 12:17:26.820: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.719310716s
STEP: Saw pod success
Dec 31 12:17:26.820: INFO: Pod "pod-7a6b039f-2bc7-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:17:26.834: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7a6b039f-2bc7-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 12:17:27.762: INFO: Waiting for pod pod-7a6b039f-2bc7-11ea-a129-0242ac110005 to disappear
Dec 31 12:17:28.121: INFO: Pod pod-7a6b039f-2bc7-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:17:28.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xbtvv" for this suite.
Dec 31 12:17:34.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:17:34.404: INFO: namespace: e2e-tests-emptydir-xbtvv, resource: bindings, ignored listing per whitelist
Dec 31 12:17:34.408: INFO: namespace e2e-tests-emptydir-xbtvv deletion completed in 6.269899176s

• [SLOW TEST:20.619 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:17:34.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-22rhq
I1231 12:17:34.679523       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-22rhq, replica count: 1
I1231 12:17:35.730188       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:36.730543       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:37.731088       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:38.731902       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:39.732642       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:40.733050       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:41.733343       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:42.733720       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:43.734163       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 12:17:44.734777       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 31 12:17:44.898: INFO: Created: latency-svc-lr95c
Dec 31 12:17:44.932: INFO: Got endpoints: latency-svc-lr95c [96.818386ms]
Dec 31 12:17:45.101: INFO: Created: latency-svc-dmnmm
Dec 31 12:17:45.125: INFO: Got endpoints: latency-svc-dmnmm [192.733925ms]
Dec 31 12:17:45.317: INFO: Created: latency-svc-wxfgq
Dec 31 12:17:45.345: INFO: Got endpoints: latency-svc-wxfgq [412.47003ms]
Dec 31 12:17:45.512: INFO: Created: latency-svc-2slhg
Dec 31 12:17:45.687: INFO: Got endpoints: latency-svc-2slhg [755.022279ms]
Dec 31 12:17:45.709: INFO: Created: latency-svc-bfkzh
Dec 31 12:17:45.723: INFO: Got endpoints: latency-svc-bfkzh [789.956422ms]
Dec 31 12:17:45.783: INFO: Created: latency-svc-pw2sp
Dec 31 12:17:45.895: INFO: Got endpoints: latency-svc-pw2sp [962.553591ms]
Dec 31 12:17:45.922: INFO: Created: latency-svc-84fkq
Dec 31 12:17:45.976: INFO: Got endpoints: latency-svc-84fkq [1.043078373s]
Dec 31 12:17:46.156: INFO: Created: latency-svc-mnzsn
Dec 31 12:17:46.184: INFO: Got endpoints: latency-svc-mnzsn [1.250356201s]
Dec 31 12:17:46.237: INFO: Created: latency-svc-cqkzh
Dec 31 12:17:46.368: INFO: Got endpoints: latency-svc-cqkzh [1.434994017s]
Dec 31 12:17:46.413: INFO: Created: latency-svc-njxlr
Dec 31 12:17:46.468: INFO: Got endpoints: latency-svc-njxlr [1.534306377s]
Dec 31 12:17:46.598: INFO: Created: latency-svc-9rngt
Dec 31 12:17:46.623: INFO: Got endpoints: latency-svc-9rngt [1.689132384s]
Dec 31 12:17:46.754: INFO: Created: latency-svc-pmsx6
Dec 31 12:17:46.793: INFO: Got endpoints: latency-svc-pmsx6 [1.859797848s]
Dec 31 12:17:46.842: INFO: Created: latency-svc-bgszv
Dec 31 12:17:46.874: INFO: Got endpoints: latency-svc-bgszv [1.941032071s]
Dec 31 12:17:47.087: INFO: Created: latency-svc-mvgg9
Dec 31 12:17:47.103: INFO: Got endpoints: latency-svc-mvgg9 [2.170373076s]
Dec 31 12:17:47.339: INFO: Created: latency-svc-s9f6x
Dec 31 12:17:47.374: INFO: Got endpoints: latency-svc-s9f6x [2.440862016s]
Dec 31 12:17:47.513: INFO: Created: latency-svc-r9djf
Dec 31 12:17:47.543: INFO: Got endpoints: latency-svc-r9djf [2.610507362s]
Dec 31 12:17:47.751: INFO: Created: latency-svc-sq2fv
Dec 31 12:17:47.763: INFO: Got endpoints: latency-svc-sq2fv [2.637876521s]
Dec 31 12:17:47.831: INFO: Created: latency-svc-d7xmb
Dec 31 12:17:47.960: INFO: Got endpoints: latency-svc-d7xmb [2.614285624s]
Dec 31 12:17:47.983: INFO: Created: latency-svc-9mb7q
Dec 31 12:17:48.020: INFO: Got endpoints: latency-svc-9mb7q [2.332400095s]
Dec 31 12:17:48.218: INFO: Created: latency-svc-lkkgr
Dec 31 12:17:48.244: INFO: Got endpoints: latency-svc-lkkgr [2.521368511s]
Dec 31 12:17:48.288: INFO: Created: latency-svc-5rtx4
Dec 31 12:17:48.315: INFO: Got endpoints: latency-svc-5rtx4 [2.419295356s]
Dec 31 12:17:48.469: INFO: Created: latency-svc-rbdcx
Dec 31 12:17:48.469: INFO: Got endpoints: latency-svc-rbdcx [2.492286864s]
Dec 31 12:17:49.091: INFO: Created: latency-svc-bd7qs
Dec 31 12:17:49.109: INFO: Got endpoints: latency-svc-bd7qs [2.925442761s]
Dec 31 12:17:49.318: INFO: Created: latency-svc-bxh4f
Dec 31 12:17:49.340: INFO: Got endpoints: latency-svc-bxh4f [2.971875187s]
Dec 31 12:17:49.505: INFO: Created: latency-svc-fdwg2
Dec 31 12:17:49.509: INFO: Got endpoints: latency-svc-fdwg2 [3.040800205s]
Dec 31 12:17:49.586: INFO: Created: latency-svc-dspds
Dec 31 12:17:49.686: INFO: Got endpoints: latency-svc-dspds [3.063538136s]
Dec 31 12:17:49.740: INFO: Created: latency-svc-x7t9d
Dec 31 12:17:49.908: INFO: Got endpoints: latency-svc-x7t9d [3.115002355s]
Dec 31 12:17:50.066: INFO: Created: latency-svc-mxbpm
Dec 31 12:17:50.084: INFO: Got endpoints: latency-svc-mxbpm [3.209651974s]
Dec 31 12:17:50.137: INFO: Created: latency-svc-rcfdf
Dec 31 12:17:50.149: INFO: Got endpoints: latency-svc-rcfdf [3.045608376s]
Dec 31 12:17:50.329: INFO: Created: latency-svc-blm7d
Dec 31 12:17:50.338: INFO: Got endpoints: latency-svc-blm7d [2.963512635s]
Dec 31 12:17:50.496: INFO: Created: latency-svc-zvs7p
Dec 31 12:17:50.510: INFO: Got endpoints: latency-svc-zvs7p [2.966930926s]
Dec 31 12:17:50.708: INFO: Created: latency-svc-p7vv5
Dec 31 12:17:50.714: INFO: Got endpoints: latency-svc-p7vv5 [2.950341418s]
Dec 31 12:17:50.918: INFO: Created: latency-svc-vmwnw
Dec 31 12:17:50.961: INFO: Got endpoints: latency-svc-vmwnw [3.00074047s]
Dec 31 12:17:51.217: INFO: Created: latency-svc-vtlvc
Dec 31 12:17:51.330: INFO: Got endpoints: latency-svc-vtlvc [3.309802696s]
Dec 31 12:17:51.419: INFO: Created: latency-svc-rhx9w
Dec 31 12:17:51.537: INFO: Got endpoints: latency-svc-rhx9w [3.292650093s]
Dec 31 12:17:51.563: INFO: Created: latency-svc-5m6rc
Dec 31 12:17:51.590: INFO: Got endpoints: latency-svc-5m6rc [3.275051221s]
Dec 31 12:17:51.880: INFO: Created: latency-svc-7b7gg
Dec 31 12:17:51.921: INFO: Got endpoints: latency-svc-7b7gg [3.451507629s]
Dec 31 12:17:52.258: INFO: Created: latency-svc-67fdd
Dec 31 12:17:52.311: INFO: Got endpoints: latency-svc-67fdd [3.201181463s]
Dec 31 12:17:52.644: INFO: Created: latency-svc-tlc5j
Dec 31 12:17:52.710: INFO: Got endpoints: latency-svc-tlc5j [3.369633046s]
Dec 31 12:17:52.948: INFO: Created: latency-svc-jbgsc
Dec 31 12:17:52.995: INFO: Got endpoints: latency-svc-jbgsc [3.486129596s]
Dec 31 12:17:53.181: INFO: Created: latency-svc-c6z8q
Dec 31 12:17:53.313: INFO: Got endpoints: latency-svc-c6z8q [3.626850711s]
Dec 31 12:17:53.512: INFO: Created: latency-svc-4l5s7
Dec 31 12:17:53.523: INFO: Got endpoints: latency-svc-4l5s7 [3.614476279s]
Dec 31 12:17:53.624: INFO: Created: latency-svc-pffbj
Dec 31 12:17:53.743: INFO: Got endpoints: latency-svc-pffbj [3.658117031s]
Dec 31 12:17:53.761: INFO: Created: latency-svc-q58vs
Dec 31 12:17:53.916: INFO: Got endpoints: latency-svc-q58vs [3.767484097s]
Dec 31 12:17:54.140: INFO: Created: latency-svc-c244m
Dec 31 12:17:54.163: INFO: Got endpoints: latency-svc-c244m [3.825050163s]
Dec 31 12:17:54.321: INFO: Created: latency-svc-r65f8
Dec 31 12:17:54.345: INFO: Got endpoints: latency-svc-r65f8 [3.834327805s]
Dec 31 12:17:54.530: INFO: Created: latency-svc-lzvpd
Dec 31 12:17:54.551: INFO: Got endpoints: latency-svc-lzvpd [3.837268624s]
Dec 31 12:17:54.615: INFO: Created: latency-svc-pdhqs
Dec 31 12:17:54.724: INFO: Got endpoints: latency-svc-pdhqs [3.763485975s]
Dec 31 12:17:54.729: INFO: Created: latency-svc-f2g8p
Dec 31 12:17:54.742: INFO: Got endpoints: latency-svc-f2g8p [3.412497988s]
Dec 31 12:17:54.901: INFO: Created: latency-svc-4b9dx
Dec 31 12:17:54.919: INFO: Got endpoints: latency-svc-4b9dx [3.381558218s]
Dec 31 12:17:54.999: INFO: Created: latency-svc-pwpmg
Dec 31 12:17:55.112: INFO: Created: latency-svc-x6g54
Dec 31 12:17:55.156: INFO: Got endpoints: latency-svc-pwpmg [3.565513258s]
Dec 31 12:17:55.319: INFO: Created: latency-svc-hrwpw
Dec 31 12:17:55.340: INFO: Got endpoints: latency-svc-x6g54 [3.419449035s]
Dec 31 12:17:55.374: INFO: Got endpoints: latency-svc-hrwpw [3.062456257s]
Dec 31 12:17:55.387: INFO: Created: latency-svc-7tl2s
Dec 31 12:17:55.448: INFO: Got endpoints: latency-svc-7tl2s [2.737262941s]
Dec 31 12:17:55.473: INFO: Created: latency-svc-kg7fn
Dec 31 12:17:55.510: INFO: Got endpoints: latency-svc-kg7fn [2.514835671s]
Dec 31 12:17:55.703: INFO: Created: latency-svc-gdgb2
Dec 31 12:17:55.728: INFO: Got endpoints: latency-svc-gdgb2 [2.414325863s]
Dec 31 12:17:55.978: INFO: Created: latency-svc-q8k78
Dec 31 12:17:56.008: INFO: Got endpoints: latency-svc-q8k78 [2.484641079s]
Dec 31 12:17:56.249: INFO: Created: latency-svc-mqnqr
Dec 31 12:17:56.267: INFO: Got endpoints: latency-svc-mqnqr [2.524108058s]
Dec 31 12:17:56.339: INFO: Created: latency-svc-9mm9p
Dec 31 12:17:56.450: INFO: Got endpoints: latency-svc-9mm9p [2.533786836s]
Dec 31 12:17:56.529: INFO: Created: latency-svc-m64j7
Dec 31 12:17:56.534: INFO: Got endpoints: latency-svc-m64j7 [2.371217553s]
Dec 31 12:17:56.723: INFO: Created: latency-svc-drb5l
Dec 31 12:17:56.779: INFO: Got endpoints: latency-svc-drb5l [2.433985889s]
Dec 31 12:17:56.946: INFO: Created: latency-svc-bghk9
Dec 31 12:17:56.973: INFO: Got endpoints: latency-svc-bghk9 [2.421815809s]
Dec 31 12:17:57.269: INFO: Created: latency-svc-g8lmm
Dec 31 12:17:57.476: INFO: Got endpoints: latency-svc-g8lmm [2.75175929s]
Dec 31 12:17:57.479: INFO: Created: latency-svc-kgkxh
Dec 31 12:17:57.494: INFO: Created: latency-svc-66hdd
Dec 31 12:17:57.518: INFO: Got endpoints: latency-svc-kgkxh [2.775054998s]
Dec 31 12:17:57.523: INFO: Got endpoints: latency-svc-66hdd [2.60446021s]
Dec 31 12:17:57.715: INFO: Created: latency-svc-h2frk
Dec 31 12:17:57.724: INFO: Got endpoints: latency-svc-h2frk [2.567506649s]
Dec 31 12:17:57.797: INFO: Created: latency-svc-6gkzc
Dec 31 12:17:57.901: INFO: Got endpoints: latency-svc-6gkzc [2.560730228s]
Dec 31 12:17:57.925: INFO: Created: latency-svc-s79zt
Dec 31 12:17:57.937: INFO: Got endpoints: latency-svc-s79zt [2.5637869s]
Dec 31 12:17:58.125: INFO: Created: latency-svc-bpf5f
Dec 31 12:17:58.135: INFO: Got endpoints: latency-svc-bpf5f [2.687050501s]
Dec 31 12:17:58.305: INFO: Created: latency-svc-gmm6d
Dec 31 12:17:58.334: INFO: Got endpoints: latency-svc-gmm6d [2.824124756s]
Dec 31 12:17:58.382: INFO: Created: latency-svc-wzxvg
Dec 31 12:17:58.540: INFO: Got endpoints: latency-svc-wzxvg [2.812064219s]
Dec 31 12:17:58.618: INFO: Created: latency-svc-4n8f6
Dec 31 12:17:58.704: INFO: Got endpoints: latency-svc-4n8f6 [369.450404ms]
Dec 31 12:17:58.730: INFO: Created: latency-svc-767lk
Dec 31 12:17:58.790: INFO: Got endpoints: latency-svc-767lk [2.782230649s]
Dec 31 12:17:58.946: INFO: Created: latency-svc-6cz55
Dec 31 12:17:58.974: INFO: Got endpoints: latency-svc-6cz55 [2.70730596s]
Dec 31 12:17:59.139: INFO: Created: latency-svc-htnfm
Dec 31 12:17:59.147: INFO: Got endpoints: latency-svc-htnfm [2.696263116s]
Dec 31 12:17:59.314: INFO: Created: latency-svc-hj2bm
Dec 31 12:17:59.350: INFO: Got endpoints: latency-svc-hj2bm [2.814949071s]
Dec 31 12:17:59.482: INFO: Created: latency-svc-q855g
Dec 31 12:17:59.511: INFO: Got endpoints: latency-svc-q855g [2.73149944s]
Dec 31 12:17:59.652: INFO: Created: latency-svc-nzjjh
Dec 31 12:17:59.672: INFO: Got endpoints: latency-svc-nzjjh [2.69902344s]
Dec 31 12:17:59.744: INFO: Created: latency-svc-9gxwr
Dec 31 12:17:59.744: INFO: Got endpoints: latency-svc-9gxwr [2.267341408s]
Dec 31 12:17:59.895: INFO: Created: latency-svc-5bpgb
Dec 31 12:17:59.915: INFO: Got endpoints: latency-svc-5bpgb [2.397491415s]
Dec 31 12:18:00.048: INFO: Created: latency-svc-9rl4s
Dec 31 12:18:00.067: INFO: Got endpoints: latency-svc-9rl4s [2.543951697s]
Dec 31 12:18:00.149: INFO: Created: latency-svc-qjwc4
Dec 31 12:18:00.244: INFO: Got endpoints: latency-svc-qjwc4 [2.519675298s]
Dec 31 12:18:00.271: INFO: Created: latency-svc-ckw5t
Dec 31 12:18:00.458: INFO: Got endpoints: latency-svc-ckw5t [2.556131108s]
Dec 31 12:18:00.820: INFO: Created: latency-svc-hncl2
Dec 31 12:18:01.069: INFO: Got endpoints: latency-svc-hncl2 [3.13184332s]
Dec 31 12:18:01.294: INFO: Created: latency-svc-6vmkf
Dec 31 12:18:01.369: INFO: Got endpoints: latency-svc-6vmkf [3.233975815s]
Dec 31 12:18:01.554: INFO: Created: latency-svc-s4t2v
Dec 31 12:18:01.622: INFO: Got endpoints: latency-svc-s4t2v [3.081497784s]
Dec 31 12:18:01.654: INFO: Created: latency-svc-c5ntq
Dec 31 12:18:01.659: INFO: Got endpoints: latency-svc-c5ntq [2.954816275s]
Dec 31 12:18:01.712: INFO: Created: latency-svc-pmbbw
Dec 31 12:18:01.869: INFO: Got endpoints: latency-svc-pmbbw [3.077976361s]
Dec 31 12:18:01.901: INFO: Created: latency-svc-wgwfs
Dec 31 12:18:01.913: INFO: Got endpoints: latency-svc-wgwfs [2.938762906s]
Dec 31 12:18:01.985: INFO: Created: latency-svc-9dffb
Dec 31 12:18:02.073: INFO: Got endpoints: latency-svc-9dffb [2.925737535s]
Dec 31 12:18:02.294: INFO: Created: latency-svc-gmpxb
Dec 31 12:18:02.319: INFO: Got endpoints: latency-svc-gmpxb [2.968760039s]
Dec 31 12:18:02.470: INFO: Created: latency-svc-rjv9f
Dec 31 12:18:02.662: INFO: Got endpoints: latency-svc-rjv9f [3.150904189s]
Dec 31 12:18:02.683: INFO: Created: latency-svc-mt624
Dec 31 12:18:02.704: INFO: Got endpoints: latency-svc-mt624 [3.031226505s]
Dec 31 12:18:02.766: INFO: Created: latency-svc-xpnbl
Dec 31 12:18:02.838: INFO: Got endpoints: latency-svc-xpnbl [3.094410976s]
Dec 31 12:18:02.899: INFO: Created: latency-svc-nwth2
Dec 31 12:18:02.917: INFO: Got endpoints: latency-svc-nwth2 [3.001951982s]
Dec 31 12:18:03.095: INFO: Created: latency-svc-6fvx4
Dec 31 12:18:03.109: INFO: Got endpoints: latency-svc-6fvx4 [3.041438813s]
Dec 31 12:18:03.172: INFO: Created: latency-svc-g85vt
Dec 31 12:18:03.285: INFO: Got endpoints: latency-svc-g85vt [3.041419303s]
Dec 31 12:18:03.308: INFO: Created: latency-svc-zkxs6
Dec 31 12:18:03.313: INFO: Got endpoints: latency-svc-zkxs6 [2.855392767s]
Dec 31 12:18:03.374: INFO: Created: latency-svc-w7rlg
Dec 31 12:18:03.502: INFO: Got endpoints: latency-svc-w7rlg [2.432178986s]
Dec 31 12:18:03.566: INFO: Created: latency-svc-l6kpn
Dec 31 12:18:03.596: INFO: Got endpoints: latency-svc-l6kpn [2.22639765s]
Dec 31 12:18:03.760: INFO: Created: latency-svc-pmtkf
Dec 31 12:18:03.775: INFO: Got endpoints: latency-svc-pmtkf [2.152772294s]
Dec 31 12:18:03.943: INFO: Created: latency-svc-jq4cd
Dec 31 12:18:03.960: INFO: Got endpoints: latency-svc-jq4cd [2.301044345s]
Dec 31 12:18:04.138: INFO: Created: latency-svc-wp668
Dec 31 12:18:04.143: INFO: Got endpoints: latency-svc-wp668 [2.27464469s]
Dec 31 12:18:04.334: INFO: Created: latency-svc-kzmgq
Dec 31 12:18:04.358: INFO: Got endpoints: latency-svc-kzmgq [2.444375234s]
Dec 31 12:18:04.501: INFO: Created: latency-svc-b4dvh
Dec 31 12:18:04.541: INFO: Got endpoints: latency-svc-b4dvh [2.468511634s]
Dec 31 12:18:04.868: INFO: Created: latency-svc-dqff9
Dec 31 12:18:04.958: INFO: Got endpoints: latency-svc-dqff9 [2.639601529s]
Dec 31 12:18:05.106: INFO: Created: latency-svc-8xn25
Dec 31 12:18:05.139: INFO: Got endpoints: latency-svc-8xn25 [2.476659593s]
Dec 31 12:18:05.380: INFO: Created: latency-svc-zkhgs
Dec 31 12:18:05.442: INFO: Created: latency-svc-pmjm8
Dec 31 12:18:05.607: INFO: Created: latency-svc-7jml4
Dec 31 12:18:05.726: INFO: Got endpoints: latency-svc-zkhgs [3.02259061s]
Dec 31 12:18:05.737: INFO: Got endpoints: latency-svc-pmjm8 [2.898656672s]
Dec 31 12:18:05.738: INFO: Created: latency-svc-z7m4f
Dec 31 12:18:05.764: INFO: Got endpoints: latency-svc-z7m4f [2.655269825s]
Dec 31 12:18:05.764: INFO: Got endpoints: latency-svc-7jml4 [2.846784812s]
Dec 31 12:18:05.890: INFO: Created: latency-svc-9ktfn
Dec 31 12:18:05.894: INFO: Got endpoints: latency-svc-9ktfn [2.608118226s]
Dec 31 12:18:05.942: INFO: Created: latency-svc-k6zz6
Dec 31 12:18:05.964: INFO: Got endpoints: latency-svc-k6zz6 [2.650642109s]
Dec 31 12:18:06.065: INFO: Created: latency-svc-xjrkj
Dec 31 12:18:06.083: INFO: Got endpoints: latency-svc-xjrkj [2.581105444s]
Dec 31 12:18:06.323: INFO: Created: latency-svc-lgzwk
Dec 31 12:18:06.342: INFO: Got endpoints: latency-svc-lgzwk [2.745705218s]
Dec 31 12:18:06.521: INFO: Created: latency-svc-kjhcd
Dec 31 12:18:06.542: INFO: Got endpoints: latency-svc-kjhcd [2.766942413s]
Dec 31 12:18:06.708: INFO: Created: latency-svc-bv4c4
Dec 31 12:18:06.724: INFO: Got endpoints: latency-svc-bv4c4 [2.764031046s]
Dec 31 12:18:06.790: INFO: Created: latency-svc-8vsvm
Dec 31 12:18:06.887: INFO: Got endpoints: latency-svc-8vsvm [2.743031914s]
Dec 31 12:18:06.906: INFO: Created: latency-svc-m8gs9
Dec 31 12:18:06.947: INFO: Got endpoints: latency-svc-m8gs9 [2.588630721s]
Dec 31 12:18:07.122: INFO: Created: latency-svc-fj7fs
Dec 31 12:18:07.122: INFO: Got endpoints: latency-svc-fj7fs [2.580826077s]
Dec 31 12:18:07.149: INFO: Created: latency-svc-hdj8p
Dec 31 12:18:07.168: INFO: Got endpoints: latency-svc-hdj8p [2.209122403s]
Dec 31 12:18:07.331: INFO: Created: latency-svc-mxrgt
Dec 31 12:18:07.357: INFO: Got endpoints: latency-svc-mxrgt [2.218109361s]
Dec 31 12:18:07.472: INFO: Created: latency-svc-c5nkt
Dec 31 12:18:07.495: INFO: Got endpoints: latency-svc-c5nkt [1.767975644s]
Dec 31 12:18:07.577: INFO: Created: latency-svc-dtlvt
Dec 31 12:18:07.655: INFO: Got endpoints: latency-svc-dtlvt [1.917398613s]
Dec 31 12:18:07.741: INFO: Created: latency-svc-jskwv
Dec 31 12:18:07.898: INFO: Got endpoints: latency-svc-jskwv [2.133304375s]
Dec 31 12:18:07.924: INFO: Created: latency-svc-dtqmc
Dec 31 12:18:07.966: INFO: Got endpoints: latency-svc-dtqmc [2.201564477s]
Dec 31 12:18:08.115: INFO: Created: latency-svc-khdsc
Dec 31 12:18:08.166: INFO: Got endpoints: latency-svc-khdsc [2.272044321s]
Dec 31 12:18:08.199: INFO: Created: latency-svc-9lppc
Dec 31 12:18:08.342: INFO: Created: latency-svc-qzpcr
Dec 31 12:18:08.355: INFO: Got endpoints: latency-svc-9lppc [2.391108593s]
Dec 31 12:18:08.372: INFO: Got endpoints: latency-svc-qzpcr [2.28898782s]
Dec 31 12:18:08.418: INFO: Created: latency-svc-lhfk8
Dec 31 12:18:08.529: INFO: Got endpoints: latency-svc-lhfk8 [2.187352681s]
Dec 31 12:18:08.608: INFO: Created: latency-svc-zl282
Dec 31 12:18:08.730: INFO: Got endpoints: latency-svc-zl282 [2.187808077s]
Dec 31 12:18:08.757: INFO: Created: latency-svc-wf7d7
Dec 31 12:18:08.802: INFO: Got endpoints: latency-svc-wf7d7 [2.077732146s]
Dec 31 12:18:08.918: INFO: Created: latency-svc-cnpn5
Dec 31 12:18:08.930: INFO: Got endpoints: latency-svc-cnpn5 [2.042627251s]
Dec 31 12:18:08.996: INFO: Created: latency-svc-9dhqn
Dec 31 12:18:09.139: INFO: Got endpoints: latency-svc-9dhqn [2.192115583s]
Dec 31 12:18:09.217: INFO: Created: latency-svc-w7g6q
Dec 31 12:18:09.338: INFO: Got endpoints: latency-svc-w7g6q [2.214876206s]
Dec 31 12:18:09.379: INFO: Created: latency-svc-zfng6
Dec 31 12:18:09.380: INFO: Got endpoints: latency-svc-zfng6 [2.211356937s]
Dec 31 12:18:09.661: INFO: Created: latency-svc-tkqqt
Dec 31 12:18:09.663: INFO: Got endpoints: latency-svc-tkqqt [2.305940238s]
Dec 31 12:18:09.932: INFO: Created: latency-svc-lkvzc
Dec 31 12:18:09.946: INFO: Got endpoints: latency-svc-lkvzc [2.451727439s]
Dec 31 12:18:10.022: INFO: Created: latency-svc-whkvg
Dec 31 12:18:10.101: INFO: Got endpoints: latency-svc-whkvg [2.446464676s]
Dec 31 12:18:10.128: INFO: Created: latency-svc-vh68t
Dec 31 12:18:10.142: INFO: Got endpoints: latency-svc-vh68t [2.243703063s]
Dec 31 12:18:10.327: INFO: Created: latency-svc-nj54c
Dec 31 12:18:10.352: INFO: Got endpoints: latency-svc-nj54c [2.385354099s]
Dec 31 12:18:10.403: INFO: Created: latency-svc-vklr4
Dec 31 12:18:10.411: INFO: Got endpoints: latency-svc-vklr4 [2.244919785s]
Dec 31 12:18:10.524: INFO: Created: latency-svc-s5hbt
Dec 31 12:18:10.556: INFO: Got endpoints: latency-svc-s5hbt [2.200572103s]
Dec 31 12:18:10.691: INFO: Created: latency-svc-kff6n
Dec 31 12:18:10.715: INFO: Got endpoints: latency-svc-kff6n [2.342634029s]
Dec 31 12:18:10.769: INFO: Created: latency-svc-4hc62
Dec 31 12:18:10.849: INFO: Got endpoints: latency-svc-4hc62 [2.319834445s]
Dec 31 12:18:10.888: INFO: Created: latency-svc-ghcbz
Dec 31 12:18:11.081: INFO: Got endpoints: latency-svc-ghcbz [2.350584987s]
Dec 31 12:18:11.152: INFO: Created: latency-svc-2lrdk
Dec 31 12:18:11.271: INFO: Got endpoints: latency-svc-2lrdk [2.468695847s]
Dec 31 12:18:11.328: INFO: Created: latency-svc-kp528
Dec 31 12:18:11.339: INFO: Got endpoints: latency-svc-kp528 [2.409204759s]
Dec 31 12:18:11.487: INFO: Created: latency-svc-9hqmj
Dec 31 12:18:11.487: INFO: Got endpoints: latency-svc-9hqmj [2.347475776s]
Dec 31 12:18:11.591: INFO: Created: latency-svc-8lzbp
Dec 31 12:18:11.655: INFO: Got endpoints: latency-svc-8lzbp [2.316685863s]
Dec 31 12:18:11.742: INFO: Created: latency-svc-f9vnn
Dec 31 12:18:11.878: INFO: Got endpoints: latency-svc-f9vnn [2.498200298s]
Dec 31 12:18:11.912: INFO: Created: latency-svc-fttcx
Dec 31 12:18:11.945: INFO: Got endpoints: latency-svc-fttcx [2.281256505s]
Dec 31 12:18:12.100: INFO: Created: latency-svc-h7fmz
Dec 31 12:18:12.132: INFO: Got endpoints: latency-svc-h7fmz [2.185428607s]
Dec 31 12:18:12.323: INFO: Created: latency-svc-96xr2
Dec 31 12:18:12.336: INFO: Got endpoints: latency-svc-96xr2 [2.234127014s]
Dec 31 12:18:12.407: INFO: Created: latency-svc-7kxbz
Dec 31 12:18:12.500: INFO: Got endpoints: latency-svc-7kxbz [2.357738201s]
Dec 31 12:18:14.039: INFO: Created: latency-svc-645cl
Dec 31 12:18:14.079: INFO: Got endpoints: latency-svc-645cl [3.72695005s]
Dec 31 12:18:14.310: INFO: Created: latency-svc-vjhtt
Dec 31 12:18:14.343: INFO: Got endpoints: latency-svc-vjhtt [3.931471808s]
Dec 31 12:18:14.555: INFO: Created: latency-svc-5szp4
Dec 31 12:18:14.578: INFO: Got endpoints: latency-svc-5szp4 [4.021262568s]
Dec 31 12:18:14.709: INFO: Created: latency-svc-5f9qp
Dec 31 12:18:14.734: INFO: Got endpoints: latency-svc-5f9qp [4.01877554s]
Dec 31 12:18:14.813: INFO: Created: latency-svc-9h9wl
Dec 31 12:18:14.885: INFO: Got endpoints: latency-svc-9h9wl [4.035227808s]
Dec 31 12:18:14.915: INFO: Created: latency-svc-96sfx
Dec 31 12:18:14.925: INFO: Got endpoints: latency-svc-96sfx [3.843410092s]
Dec 31 12:18:15.075: INFO: Created: latency-svc-wcvq7
Dec 31 12:18:15.130: INFO: Got endpoints: latency-svc-wcvq7 [3.858237451s]
Dec 31 12:18:15.359: INFO: Created: latency-svc-4w9rm
Dec 31 12:18:15.396: INFO: Got endpoints: latency-svc-4w9rm [4.057052347s]
Dec 31 12:18:15.520: INFO: Created: latency-svc-rcgbw
Dec 31 12:18:15.561: INFO: Got endpoints: latency-svc-rcgbw [4.074285721s]
Dec 31 12:18:15.706: INFO: Created: latency-svc-q8phd
Dec 31 12:18:15.753: INFO: Got endpoints: latency-svc-q8phd [4.098338837s]
Dec 31 12:18:15.760: INFO: Created: latency-svc-2chb7
Dec 31 12:18:15.835: INFO: Got endpoints: latency-svc-2chb7 [3.95708897s]
Dec 31 12:18:15.963: INFO: Created: latency-svc-hhdfr
Dec 31 12:18:16.066: INFO: Got endpoints: latency-svc-hhdfr [4.120824205s]
Dec 31 12:18:16.090: INFO: Created: latency-svc-nfxrp
Dec 31 12:18:16.164: INFO: Got endpoints: latency-svc-nfxrp [4.031294643s]
Dec 31 12:18:16.321: INFO: Created: latency-svc-rrmh2
Dec 31 12:18:16.345: INFO: Got endpoints: latency-svc-rrmh2 [4.009512595s]
Dec 31 12:18:16.410: INFO: Created: latency-svc-p6xj8
Dec 31 12:18:16.530: INFO: Got endpoints: latency-svc-p6xj8 [4.030518217s]
Dec 31 12:18:16.550: INFO: Created: latency-svc-4s9sm
Dec 31 12:18:16.685: INFO: Got endpoints: latency-svc-4s9sm [2.606048878s]
Dec 31 12:18:16.737: INFO: Created: latency-svc-75xss
Dec 31 12:18:16.758: INFO: Got endpoints: latency-svc-75xss [2.415533342s]
Dec 31 12:18:16.860: INFO: Created: latency-svc-lqxtx
Dec 31 12:18:16.887: INFO: Got endpoints: latency-svc-lqxtx [2.309071216s]
Dec 31 12:18:17.024: INFO: Created: latency-svc-72hqj
Dec 31 12:18:17.037: INFO: Got endpoints: latency-svc-72hqj [2.302973191s]
Dec 31 12:18:17.260: INFO: Created: latency-svc-kgcnp
Dec 31 12:18:17.267: INFO: Got endpoints: latency-svc-kgcnp [2.381818353s]
Dec 31 12:18:17.422: INFO: Created: latency-svc-4dkpm
Dec 31 12:18:17.486: INFO: Created: latency-svc-r5rxv
Dec 31 12:18:17.490: INFO: Got endpoints: latency-svc-4dkpm [2.565234101s]
Dec 31 12:18:17.503: INFO: Got endpoints: latency-svc-r5rxv [2.372814292s]
Dec 31 12:18:17.610: INFO: Created: latency-svc-fvfrj
Dec 31 12:18:17.622: INFO: Got endpoints: latency-svc-fvfrj [2.225920477s]
Dec 31 12:18:17.681: INFO: Created: latency-svc-dwcjc
Dec 31 12:18:17.766: INFO: Got endpoints: latency-svc-dwcjc [2.205026578s]
Dec 31 12:18:17.785: INFO: Created: latency-svc-qg694
Dec 31 12:18:17.804: INFO: Got endpoints: latency-svc-qg694 [2.050277235s]
Dec 31 12:18:17.988: INFO: Created: latency-svc-5tqgq
Dec 31 12:18:18.039: INFO: Got endpoints: latency-svc-5tqgq [2.203757882s]
Dec 31 12:18:18.157: INFO: Created: latency-svc-g9cs8
Dec 31 12:18:18.171: INFO: Got endpoints: latency-svc-g9cs8 [2.105013142s]
Dec 31 12:18:18.342: INFO: Created: latency-svc-66w8q
Dec 31 12:18:18.372: INFO: Got endpoints: latency-svc-66w8q [2.207683904s]
Dec 31 12:18:18.609: INFO: Created: latency-svc-zv45r
Dec 31 12:18:18.634: INFO: Got endpoints: latency-svc-zv45r [2.288224155s]
Dec 31 12:18:19.065: INFO: Created: latency-svc-7bmxd
Dec 31 12:18:19.141: INFO: Got endpoints: latency-svc-7bmxd [2.610080354s]
Dec 31 12:18:19.473: INFO: Created: latency-svc-zw6mb
Dec 31 12:18:19.494: INFO: Got endpoints: latency-svc-zw6mb [2.808960102s]
Dec 31 12:18:19.646: INFO: Created: latency-svc-4zsbr
Dec 31 12:18:19.657: INFO: Got endpoints: latency-svc-4zsbr [2.897979326s]
Dec 31 12:18:19.834: INFO: Created: latency-svc-86drw
Dec 31 12:18:19.844: INFO: Got endpoints: latency-svc-86drw [2.956531979s]
Dec 31 12:18:20.068: INFO: Created: latency-svc-4lmg7
Dec 31 12:18:20.072: INFO: Got endpoints: latency-svc-4lmg7 [3.033943486s]
Dec 31 12:18:20.119: INFO: Created: latency-svc-9nvlm
Dec 31 12:18:20.135: INFO: Got endpoints: latency-svc-9nvlm [2.867856348s]
Dec 31 12:18:20.304: INFO: Created: latency-svc-chmmb
Dec 31 12:18:20.320: INFO: Got endpoints: latency-svc-chmmb [2.83025578s]
Dec 31 12:18:20.469: INFO: Created: latency-svc-pdh22
Dec 31 12:18:20.716: INFO: Got endpoints: latency-svc-pdh22 [3.213564909s]
Dec 31 12:18:20.745: INFO: Created: latency-svc-rctr7
Dec 31 12:18:20.845: INFO: Got endpoints: latency-svc-rctr7 [3.222580642s]
Dec 31 12:18:20.899: INFO: Created: latency-svc-dg9s2
Dec 31 12:18:20.914: INFO: Got endpoints: latency-svc-dg9s2 [3.146952351s]
Dec 31 12:18:21.115: INFO: Created: latency-svc-srnk9
Dec 31 12:18:21.432: INFO: Got endpoints: latency-svc-srnk9 [3.627565508s]
Dec 31 12:18:21.456: INFO: Created: latency-svc-2vqcf
Dec 31 12:18:21.467: INFO: Got endpoints: latency-svc-2vqcf [3.42751669s]
Dec 31 12:18:21.628: INFO: Created: latency-svc-9kjw4
Dec 31 12:18:21.650: INFO: Got endpoints: latency-svc-9kjw4 [3.478371438s]
Dec 31 12:18:21.739: INFO: Created: latency-svc-qg5xr
Dec 31 12:18:21.806: INFO: Got endpoints: latency-svc-qg5xr [3.434388885s]
Dec 31 12:18:21.837: INFO: Created: latency-svc-dcj7h
Dec 31 12:18:21.861: INFO: Got endpoints: latency-svc-dcj7h [3.227280239s]
Dec 31 12:18:22.027: INFO: Created: latency-svc-t98k7
Dec 31 12:18:22.046: INFO: Got endpoints: latency-svc-t98k7 [2.904129944s]
Dec 31 12:18:22.211: INFO: Created: latency-svc-xm74s
Dec 31 12:18:22.213: INFO: Got endpoints: latency-svc-xm74s [2.717949385s]
Dec 31 12:18:22.213: INFO: Latencies: [192.733925ms 369.450404ms 412.47003ms 755.022279ms 789.956422ms 962.553591ms 1.043078373s 1.250356201s 1.434994017s 1.534306377s 1.689132384s 1.767975644s 1.859797848s 1.917398613s 1.941032071s 2.042627251s 2.050277235s 2.077732146s 2.105013142s 2.133304375s 2.152772294s 2.170373076s 2.185428607s 2.187352681s 2.187808077s 2.192115583s 2.200572103s 2.201564477s 2.203757882s 2.205026578s 2.207683904s 2.209122403s 2.211356937s 2.214876206s 2.218109361s 2.225920477s 2.22639765s 2.234127014s 2.243703063s 2.244919785s 2.267341408s 2.272044321s 2.27464469s 2.281256505s 2.288224155s 2.28898782s 2.301044345s 2.302973191s 2.305940238s 2.309071216s 2.316685863s 2.319834445s 2.332400095s 2.342634029s 2.347475776s 2.350584987s 2.357738201s 2.371217553s 2.372814292s 2.381818353s 2.385354099s 2.391108593s 2.397491415s 2.409204759s 2.414325863s 2.415533342s 2.419295356s 2.421815809s 2.432178986s 2.433985889s 2.440862016s 2.444375234s 2.446464676s 2.451727439s 2.468511634s 2.468695847s 2.476659593s 2.484641079s 2.492286864s 2.498200298s 2.514835671s 2.519675298s 2.521368511s 2.524108058s 2.533786836s 2.543951697s 2.556131108s 2.560730228s 2.5637869s 2.565234101s 2.567506649s 2.580826077s 2.581105444s 2.588630721s 2.60446021s 2.606048878s 2.608118226s 2.610080354s 2.610507362s 2.614285624s 2.637876521s 2.639601529s 2.650642109s 2.655269825s 2.687050501s 2.696263116s 2.69902344s 2.70730596s 2.717949385s 2.73149944s 2.737262941s 2.743031914s 2.745705218s 2.75175929s 2.764031046s 2.766942413s 2.775054998s 2.782230649s 2.808960102s 2.812064219s 2.814949071s 2.824124756s 2.83025578s 2.846784812s 2.855392767s 2.867856348s 2.897979326s 2.898656672s 2.904129944s 2.925442761s 2.925737535s 2.938762906s 2.950341418s 2.954816275s 2.956531979s 2.963512635s 2.966930926s 2.968760039s 2.971875187s 3.00074047s 3.001951982s 3.02259061s 3.031226505s 3.033943486s 3.040800205s 3.041419303s 3.041438813s 3.045608376s 3.062456257s 3.063538136s 3.077976361s 3.081497784s 3.094410976s 3.115002355s 3.13184332s 3.146952351s 3.150904189s 3.201181463s 3.209651974s 3.213564909s 3.222580642s 3.227280239s 3.233975815s 3.275051221s 3.292650093s 3.309802696s 3.369633046s 3.381558218s 3.412497988s 3.419449035s 3.42751669s 3.434388885s 3.451507629s 3.478371438s 3.486129596s 3.565513258s 3.614476279s 3.626850711s 3.627565508s 3.658117031s 3.72695005s 3.763485975s 3.767484097s 3.825050163s 3.834327805s 3.837268624s 3.843410092s 3.858237451s 3.931471808s 3.95708897s 4.009512595s 4.01877554s 4.021262568s 4.030518217s 4.031294643s 4.035227808s 4.057052347s 4.074285721s 4.098338837s 4.120824205s]
Dec 31 12:18:22.214: INFO: 50 %ile: 2.637876521s
Dec 31 12:18:22.214: INFO: 90 %ile: 3.72695005s
Dec 31 12:18:22.214: INFO: 99 %ile: 4.098338837s
Dec 31 12:18:22.214: INFO: Total sample count: 200
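The 50/90/99 %ile figures above are percentiles taken over the 200 latency samples listed in the preceding log line. As a minimal sketch of how such a summary can be reproduced (using a generic nearest-rank percentile definition, not the actual e2e framework code), assuming latencies are available as floats in seconds:

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of samples <= it."""
    s = sorted(samples)
    # rank = ceil(p/100 * n), clamped to at least 1
    rank = max(1, -(-p * len(s) // 100))
    return s[rank - 1]

# Illustrative sample set (not the real 200 measurements from the log above)
latencies = [0.19, 0.37, 2.5, 2.64, 3.73, 4.10]
print(percentile(latencies, 50))  # → 2.5
print(percentile(latencies, 90))
print(percentile(latencies, 99))
```

Applied to the full 200-sample list, this style of computation yields summary lines like the 50/90/99 %ile INFO messages above; the exact rank convention used by the test framework may differ slightly at the boundaries.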
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:18:22.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-22rhq" for this suite.
Dec 31 12:19:20.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:19:20.357: INFO: namespace: e2e-tests-svc-latency-22rhq, resource: bindings, ignored listing per whitelist
Dec 31 12:19:20.430: INFO: namespace e2e-tests-svc-latency-22rhq deletion completed in 58.207739516s

• [SLOW TEST:106.022 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:19:20.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 31 12:19:20.934: INFO: Waiting up to 5m0s for pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-6wrxj" to be "success or failure"
Dec 31 12:19:20.944: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.850361ms
Dec 31 12:19:23.126: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19213326s
Dec 31 12:19:25.183: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248671141s
Dec 31 12:19:27.746: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.811319079s
Dec 31 12:19:30.163: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.22861312s
Dec 31 12:19:32.175: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.240842196s
STEP: Saw pod success
Dec 31 12:19:32.175: INFO: Pod "downward-api-c609399d-2bc7-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:19:32.179: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c609399d-2bc7-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 12:19:32.553: INFO: Waiting for pod downward-api-c609399d-2bc7-11ea-a129-0242ac110005 to disappear
Dec 31 12:19:32.580: INFO: Pod downward-api-c609399d-2bc7-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:19:32.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6wrxj" for this suite.
Dec 31 12:19:38.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:19:38.835: INFO: namespace: e2e-tests-downward-api-6wrxj, resource: bindings, ignored listing per whitelist
Dec 31 12:19:38.946: INFO: namespace e2e-tests-downward-api-6wrxj deletion completed in 6.33712624s

• [SLOW TEST:18.516 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
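Each of these pod-based tests follows the same pattern visible in the log: wait up to 5m0s, repeatedly reading the pod phase ("Pending", "Pending", ... "Succeeded") until it reaches a terminal state or the deadline passes. A sketch of that wait loop, where `get_phase` is a hypothetical callback standing in for the real API read of `pod.status.phase` (the actual framework does this in Go via client-go):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed', or raise on timeout.

    get_phase is a stand-in for the real pod-status lookup; it is not a
    Kubernetes library function. clock/sleep are injectable for testing.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)

# Fake phase sequence mirroring the Pending -> Succeeded progression in the log:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), interval=0))  # → Succeeded
```

The "success or failure" condition in the log corresponds to accepting either terminal phase; the test then asserts separately that the phase was `Succeeded`.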
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:19:38.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 31 12:19:39.117: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 31 12:19:39.223: INFO: Waiting for terminating namespaces to be deleted...
Dec 31 12:19:39.236: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 31 12:19:39.261: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 31 12:19:39.262: INFO: 	Container coredns ready: true, restart count 0
Dec 31 12:19:39.262: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Dec 31 12:19:39.262: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 12:19:39.262: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 31 12:19:39.262: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 31 12:19:39.262: INFO: 	Container weave ready: true, restart count 0
Dec 31 12:19:39.262: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 12:19:39.262: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 31 12:19:39.262: INFO: 	Container coredns ready: true, restart count 0
Dec 31 12:19:39.262: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 31 12:19:39.262: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 31 12:19:39.262: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e573fe05711b9c], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:19:40.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nstnn" for this suite.
Dec 31 12:19:46.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:19:46.696: INFO: namespace: e2e-tests-sched-pred-nstnn, resource: bindings, ignored listing per whitelist
Dec 31 12:19:46.745: INFO: namespace e2e-tests-sched-pred-nstnn deletion completed in 6.392186738s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.798 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
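The FailedScheduling event above is the expected outcome of this predicate test: a pod with a non-empty nodeSelector can only land on a node whose labels contain every selector key/value pair, and the single node here carries no matching label. The core subset check can be sketched as follows (a simplification; the real scheduler also evaluates taints, resources, affinity, and other predicates):

```python
def node_selector_matches(node_labels, node_selector):
    """True if every key/value pair in the pod's nodeSelector appears in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical labels for the single node in this cluster:
node_labels = {"kubernetes.io/hostname": "hunter-server-hu5at5svl7ps"}

# The test deliberately uses a selector no node carries, so 0/1 nodes match:
print(node_selector_matches(node_labels, {"no-such-label": "42"}))  # → False
# An empty selector places no constraint:
print(node_selector_matches(node_labels, {}))  # → True
```

When no node passes the check, the pod stays Pending and the scheduler emits a FailedScheduling event like the one logged above, which is exactly what the test asserts.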
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:19:46.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:19:46.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-7vd2q" to be "success or failure"
Dec 31 12:19:46.892: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.940317ms
Dec 31 12:19:49.024: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152340656s
Dec 31 12:19:51.055: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184204593s
Dec 31 12:19:53.253: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382197281s
Dec 31 12:19:55.269: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39778961s
Dec 31 12:19:57.366: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.494586343s
STEP: Saw pod success
Dec 31 12:19:57.366: INFO: Pod "downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:19:57.397: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:19:57.509: INFO: Waiting for pod downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005 to disappear
Dec 31 12:19:57.525: INFO: Pod downwardapi-volume-d57ed5a7-2bc7-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:19:57.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7vd2q" for this suite.
Dec 31 12:20:03.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:20:03.758: INFO: namespace: e2e-tests-projected-7vd2q, resource: bindings, ignored listing per whitelist
Dec 31 12:20:03.965: INFO: namespace e2e-tests-projected-7vd2q deletion completed in 6.428415039s

• [SLOW TEST:17.219 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:20:03.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-ntpcw/secret-test-dfe10665-2bc7-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:20:04.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-ntpcw" to be "success or failure"
Dec 31 12:20:04.324: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.129826ms
Dec 31 12:20:06.461: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144027872s
Dec 31 12:20:08.487: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170644473s
Dec 31 12:20:10.504: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187056373s
Dec 31 12:20:12.635: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317986396s
Dec 31 12:20:14.904: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.587253373s
STEP: Saw pod success
Dec 31 12:20:14.904: INFO: Pod "pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:20:14.914: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005 container env-test: 
STEP: delete the pod
Dec 31 12:20:15.277: INFO: Waiting for pod pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005 to disappear
Dec 31 12:20:15.608: INFO: Pod pod-configmaps-dfe239bb-2bc7-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:20:15.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ntpcw" for this suite.
Dec 31 12:20:21.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:20:21.771: INFO: namespace: e2e-tests-secrets-ntpcw, resource: bindings, ignored listing per whitelist
Dec 31 12:20:21.892: INFO: namespace e2e-tests-secrets-ntpcw deletion completed in 6.260841253s

• [SLOW TEST:17.926 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:20:21.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 31 12:20:32.764: INFO: Successfully updated pod "annotationupdateea80bae0-2bc7-11ea-a129-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:20:34.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r9l7c" for this suite.
Dec 31 12:20:59.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:20:59.105: INFO: namespace: e2e-tests-downward-api-r9l7c, resource: bindings, ignored listing per whitelist
Dec 31 12:20:59.354: INFO: namespace e2e-tests-downward-api-r9l7c deletion completed in 24.435247964s

• [SLOW TEST:37.461 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:20:59.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:21:09.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bzjng" for this suite.
Dec 31 12:21:55.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:21:55.764: INFO: namespace: e2e-tests-kubelet-test-bzjng, resource: bindings, ignored listing per whitelist
Dec 31 12:21:55.899: INFO: namespace e2e-tests-kubelet-test-bzjng deletion completed in 46.206300247s

• [SLOW TEST:56.544 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:21:55.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 31 12:21:56.829: INFO: created pod pod-service-account-defaultsa
Dec 31 12:21:56.829: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 31 12:21:56.854: INFO: created pod pod-service-account-mountsa
Dec 31 12:21:56.854: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 31 12:21:56.880: INFO: created pod pod-service-account-nomountsa
Dec 31 12:21:56.880: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 31 12:21:56.949: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 31 12:21:56.950: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 31 12:21:57.208: INFO: created pod pod-service-account-mountsa-mountspec
Dec 31 12:21:57.208: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 31 12:21:57.268: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 31 12:21:57.268: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 31 12:21:58.193: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 31 12:21:58.194: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 31 12:21:58.480: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 31 12:21:58.480: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 31 12:21:59.614: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 31 12:21:59.615: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:21:59.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-lx2bb" for this suite.
Dec 31 12:22:27.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:22:27.925: INFO: namespace: e2e-tests-svcaccounts-lx2bb, resource: bindings, ignored listing per whitelist
Dec 31 12:22:27.931: INFO: namespace e2e-tests-svcaccounts-lx2bb deletion completed in 27.418928808s

• [SLOW TEST:32.032 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:22:27.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:22:40.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zfkck" for this suite.
Dec 31 12:22:47.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:22:47.249: INFO: namespace: e2e-tests-kubelet-test-zfkck, resource: bindings, ignored listing per whitelist
Dec 31 12:22:47.405: INFO: namespace e2e-tests-kubelet-test-zfkck deletion completed in 7.009756372s

• [SLOW TEST:19.474 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:22:47.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 31 12:22:47.713: INFO: Waiting up to 5m0s for pod "pod-414727ab-2bc8-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-vhxkd" to be "success or failure"
Dec 31 12:22:47.866: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 152.785508ms
Dec 31 12:22:49.888: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175028148s
Dec 31 12:22:51.902: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188664484s
Dec 31 12:22:54.337: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623730318s
Dec 31 12:22:56.347: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.634106078s
Dec 31 12:22:58.410: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.697236764s
STEP: Saw pod success
Dec 31 12:22:58.410: INFO: Pod "pod-414727ab-2bc8-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:22:58.423: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-414727ab-2bc8-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 12:22:58.610: INFO: Waiting for pod pod-414727ab-2bc8-11ea-a129-0242ac110005 to disappear
Dec 31 12:22:58.626: INFO: Pod pod-414727ab-2bc8-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:22:58.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vhxkd" for this suite.
Dec 31 12:23:04.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:23:05.000: INFO: namespace: e2e-tests-emptydir-vhxkd, resource: bindings, ignored listing per whitelist
Dec 31 12:23:05.016: INFO: namespace e2e-tests-emptydir-vhxkd deletion completed in 6.377282632s

• [SLOW TEST:17.611 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:23:05.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:23:05.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-pxjds" to be "success or failure"
Dec 31 12:23:05.227: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 69.936276ms
Dec 31 12:23:07.242: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085099866s
Dec 31 12:23:09.259: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102459894s
Dec 31 12:23:11.601: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444231505s
Dec 31 12:23:13.649: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491934802s
Dec 31 12:23:15.678: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.520859306s
STEP: Saw pod success
Dec 31 12:23:15.678: INFO: Pod "downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:23:15.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:23:15.875: INFO: Waiting for pod downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005 to disappear
Dec 31 12:23:15.901: INFO: Pod downwardapi-volume-4bae50a7-2bc8-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:23:15.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pxjds" for this suite.
Dec 31 12:23:22.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:23:22.094: INFO: namespace: e2e-tests-downward-api-pxjds, resource: bindings, ignored listing per whitelist
Dec 31 12:23:22.327: INFO: namespace e2e-tests-downward-api-pxjds deletion completed in 6.359010664s

• [SLOW TEST:17.310 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:23:22.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-kw6bw
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-kw6bw
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-kw6bw
Dec 31 12:23:22.925: INFO: Found 0 stateful pods, waiting for 1
Dec 31 12:23:32.939: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 31 12:23:32.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 12:23:33.535: INFO: stderr: ""
Dec 31 12:23:33.535: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 12:23:33.535: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 12:23:33.550: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 31 12:23:43.568: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 12:23:43.568: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 12:23:43.612: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999716s
Dec 31 12:23:44.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980058377s
Dec 31 12:23:45.667: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.936991984s
Dec 31 12:23:46.681: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.925198368s
Dec 31 12:23:47.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.91107601s
Dec 31 12:23:48.722: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.899288704s
Dec 31 12:23:49.743: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.870834751s
Dec 31 12:23:50.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.849645721s
Dec 31 12:23:51.766: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.840221212s
Dec 31 12:23:52.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 826.097309ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-kw6bw
Dec 31 12:23:53.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 12:23:54.323: INFO: stderr: ""
Dec 31 12:23:54.324: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 12:23:54.324: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 12:23:54.338: INFO: Found 1 stateful pods, waiting for 3
Dec 31 12:24:04.368: INFO: Found 2 stateful pods, waiting for 3
Dec 31 12:24:14.352: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:24:14.352: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:24:14.352: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 12:24:24.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:24:24.357: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:24:24.357: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 31 12:24:24.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 12:24:25.037: INFO: stderr: ""
Dec 31 12:24:25.037: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 12:24:25.037: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 12:24:25.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 12:24:25.642: INFO: stderr: ""
Dec 31 12:24:25.642: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 12:24:25.643: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 12:24:25.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 12:24:26.310: INFO: stderr: ""
Dec 31 12:24:26.310: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 12:24:26.310: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 12:24:26.310: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 12:24:26.328: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 31 12:24:36.358: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 12:24:36.358: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 12:24:36.358: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 12:24:36.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999515s
Dec 31 12:24:37.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973895317s
Dec 31 12:24:38.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.943424508s
Dec 31 12:24:39.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.907341442s
Dec 31 12:24:40.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.876634837s
Dec 31 12:24:41.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.851684323s
Dec 31 12:24:42.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.83505931s
Dec 31 12:24:43.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.753863236s
Dec 31 12:24:44.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.730894025s
Dec 31 12:24:45.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 508.75992ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-kw6bw
Dec 31 12:24:46.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 12:24:47.506: INFO: stderr: ""
Dec 31 12:24:47.506: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 12:24:47.506: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 12:24:47.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 12:24:48.113: INFO: stderr: ""
Dec 31 12:24:48.113: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 12:24:48.113: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 12:24:48.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kw6bw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 12:24:48.944: INFO: stderr: ""
Dec 31 12:24:48.944: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 12:24:48.944: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 12:24:48.944: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 31 12:25:29.037: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kw6bw
Dec 31 12:25:29.050: INFO: Scaling statefulset ss to 0
Dec 31 12:25:29.066: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 12:25:29.070: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:25:29.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-kw6bw" for this suite.
Dec 31 12:25:37.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:25:37.210: INFO: namespace: e2e-tests-statefulset-kw6bw, resource: bindings, ignored listing per whitelist
Dec 31 12:25:37.372: INFO: namespace e2e-tests-statefulset-kw6bw deletion completed in 8.265100158s

• [SLOW TEST:135.045 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:25:37.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 31 12:25:47.783: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a693cf47-2bc8-11ea-a129-0242ac110005,GenerateName:,Namespace:e2e-tests-events-sjvjx,SelfLink:/api/v1/namespaces/e2e-tests-events-sjvjx/pods/send-events-a693cf47-2bc8-11ea-a129-0242ac110005,UID:a69504e4-2bc8-11ea-a994-fa163e34d433,ResourceVersion:16685279,Generation:0,CreationTimestamp:2019-12-31 12:25:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 637995737,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nkfdc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nkfdc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-nkfdc true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008bba30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008bba50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:25:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:25:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:25:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 12:25:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-31 12:25:37 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-31 12:25:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://89cdbeb85e22776298618dcecc479767717ad27d5e391a1b7bdf4d40b346d93f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 31 12:25:49.800: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 31 12:25:51.831: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:25:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-sjvjx" for this suite.
Dec 31 12:26:33.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:26:34.207: INFO: namespace: e2e-tests-events-sjvjx, resource: bindings, ignored listing per whitelist
Dec 31 12:26:34.309: INFO: namespace e2e-tests-events-sjvjx deletion completed in 42.412195322s

• [SLOW TEST:56.937 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
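The two checks above ("Saw scheduler event" / "Saw kubelet event") list Events filtered down to the one pod and the reporting component. A minimal sketch of assembling such a field selector — the helper name is illustrative, not the e2e framework's actual code, though the selector keys themselves are ones the Events API supports:

```python
def event_field_selector(pod_name: str, namespace: str, source: str) -> str:
    """Build a field selector matching events for one pod from one component.

    Sketch only: the keys (involvedObject.*, source) are real Events API
    field-selector keys; the function itself is hypothetical.
    """
    fields = {
        "involvedObject.kind": "Pod",
        "involvedObject.name": pod_name,
        "involvedObject.namespace": namespace,
        "source": source,  # e.g. "default-scheduler" or "kubelet"
    }
    return ",".join(f"{k}={v}" for k, v in fields.items())
```

The same string works with `kubectl get events --field-selector ...` against a real cluster.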
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:26:34.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 12:26:34.644: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 31 12:26:34.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zh9dt/daemonsets","resourceVersion":"16685354"},"items":null}

Dec 31 12:26:34.664: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zh9dt/pods","resourceVersion":"16685354"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:26:34.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zh9dt" for this suite.
Dec 31 12:26:40.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:26:40.776: INFO: namespace: e2e-tests-daemonsets-zh9dt, resource: bindings, ignored listing per whitelist
Dec 31 12:26:40.870: INFO: namespace e2e-tests-daemonsets-zh9dt deletion completed in 6.149573142s

S [SKIPPING] [6.561 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 31 12:26:34.644: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
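The `[SKIPPING]` result above comes from a node-count precondition, and the odd "(not -1)" indicates the framework could not determine a schedulable node count at all. A sketch of that kind of guard (names are illustrative, not the framework's):

```python
class SkipTest(Exception):
    """Raised to mark a test as skipped rather than failed."""


def skip_unless_node_count_at_least(node_count: int, minimum: int) -> None:
    # A negative count means node discovery failed entirely; the message
    # then reads like the log above: "Requires at least 2 nodes (not -1)".
    if node_count < minimum:
        raise SkipTest(f"Requires at least {minimum} nodes (not {node_count})")
```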
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:26:40.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 31 12:26:42.118: INFO: Pod name wrapped-volume-race-ccf2a9b4-2bc8-11ea-a129-0242ac110005: Found 0 pods out of 5
Dec 31 12:26:47.158: INFO: Pod name wrapped-volume-race-ccf2a9b4-2bc8-11ea-a129-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ccf2a9b4-2bc8-11ea-a129-0242ac110005 in namespace e2e-tests-emptydir-wrapper-lwchn, will wait for the garbage collector to delete the pods
Dec 31 12:28:39.377: INFO: Deleting ReplicationController wrapped-volume-race-ccf2a9b4-2bc8-11ea-a129-0242ac110005 took: 37.592633ms
Dec 31 12:28:39.678: INFO: Terminating ReplicationController wrapped-volume-race-ccf2a9b4-2bc8-11ea-a129-0242ac110005 pods took: 300.755176ms
STEP: Creating RC which spawns configmap-volume pods
Dec 31 12:29:33.394: INFO: Pod name wrapped-volume-race-3307c705-2bc9-11ea-a129-0242ac110005: Found 0 pods out of 5
Dec 31 12:29:38.440: INFO: Pod name wrapped-volume-race-3307c705-2bc9-11ea-a129-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3307c705-2bc9-11ea-a129-0242ac110005 in namespace e2e-tests-emptydir-wrapper-lwchn, will wait for the garbage collector to delete the pods
Dec 31 12:32:04.914: INFO: Deleting ReplicationController wrapped-volume-race-3307c705-2bc9-11ea-a129-0242ac110005 took: 46.341279ms
Dec 31 12:32:05.314: INFO: Terminating ReplicationController wrapped-volume-race-3307c705-2bc9-11ea-a129-0242ac110005 pods took: 400.795585ms
STEP: Creating RC which spawns configmap-volume pods
Dec 31 12:32:54.043: INFO: Pod name wrapped-volume-race-aa8f4b24-2bc9-11ea-a129-0242ac110005: Found 0 pods out of 5
Dec 31 12:32:59.171: INFO: Pod name wrapped-volume-race-aa8f4b24-2bc9-11ea-a129-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-aa8f4b24-2bc9-11ea-a129-0242ac110005 in namespace e2e-tests-emptydir-wrapper-lwchn, will wait for the garbage collector to delete the pods
Dec 31 12:35:21.321: INFO: Deleting ReplicationController wrapped-volume-race-aa8f4b24-2bc9-11ea-a129-0242ac110005 took: 52.829884ms
Dec 31 12:35:21.821: INFO: Terminating ReplicationController wrapped-volume-race-aa8f4b24-2bc9-11ea-a129-0242ac110005 pods took: 500.759214ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:36:06.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-lwchn" for this suite.
Dec 31 12:36:16.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:36:17.027: INFO: namespace: e2e-tests-emptydir-wrapper-lwchn, resource: bindings, ignored listing per whitelist
Dec 31 12:36:17.136: INFO: namespace e2e-tests-emptydir-wrapper-lwchn deletion completed in 10.216409191s

• [SLOW TEST:576.266 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
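The test body above repeats one cycle three times: spawn a ReplicationController of 5 pods that each mount the 50 configmaps, ensure every pod reaches Running, then delete the RC and wait for the garbage collector to reap the pods. The loop shape, with the cluster operations injected as callables (a sketch of the structure visible in the log, not the framework code):

```python
from typing import Callable


def run_wrapped_volume_race(spawn_rc: Callable[[int], None],
                            ensure_running: Callable[[], None],
                            delete_rc_and_wait: Callable[[], None],
                            iterations: int = 3,
                            replicas: int = 5) -> int:
    """Drive the configmap-volume race check; returns cycles completed."""
    done = 0
    for _ in range(iterations):
        spawn_rc(replicas)        # RC whose pods mount every configmap
        ensure_running()          # all replicas must reach Running, race-free
        delete_rc_and_wait()      # GC removes the pods before the next cycle
        done += 1
    return done
```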
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:36:17.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7dfkj
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 12:36:17.444: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 12:36:51.834: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-7dfkj PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:36:51.834: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:36:52.369: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:36:52.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7dfkj" for this suite.
Dec 31 12:37:20.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:37:20.775: INFO: namespace: e2e-tests-pod-network-test-7dfkj, resource: bindings, ignored listing per whitelist
Dec 31 12:37:20.877: INFO: namespace e2e-tests-pod-network-test-7dfkj deletion completed in 28.490662491s

• [SLOW TEST:63.740 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
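The curl invocation in the log asks the test container listening on port 8080 to dial the target pod over UDP via its `/dial` endpoint. Building that URL, with the query parameters taken verbatim from the log line above (the helper name is illustrative):

```python
from urllib.parse import urlencode


def dial_url(prober_ip: str, target_ip: str, protocol: str = "udp",
             target_port: int = 8081, tries: int = 1) -> str:
    """URL asking the prober pod (port 8080) to dial the target pod."""
    query = urlencode({
        "request": "hostName",   # ask the target to report its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{prober_ip}:8080/dial?{query}"
```

The test then compares the hostnames reported in the response against the expected endpoints (the empty `map[]` in the log is that expectation being drained).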
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:37:20.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:37:21.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-bhz2q" to be "success or failure"
Dec 31 12:37:21.507: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.421432ms
Dec 31 12:37:23.910: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460419487s
Dec 31 12:37:25.924: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475028285s
Dec 31 12:37:28.041: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591379158s
Dec 31 12:37:30.236: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.786225509s
Dec 31 12:37:32.251: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.801758608s
STEP: Saw pod success
Dec 31 12:37:32.251: INFO: Pod "downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:37:32.257: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:37:32.437: INFO: Waiting for pod downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005 to disappear
Dec 31 12:37:32.461: INFO: Pod downwardapi-volume-49ebe077-2bca-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:37:32.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bhz2q" for this suite.
Dec 31 12:37:38.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:37:38.692: INFO: namespace: e2e-tests-downward-api-bhz2q, resource: bindings, ignored listing per whitelist
Dec 31 12:37:38.756: INFO: namespace e2e-tests-downward-api-bhz2q deletion completed in 6.273553941s

• [SLOW TEST:17.878 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:37:38.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-547fe0e2-2bca-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:37:38.948: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-mjp8t" to be "success or failure"
Dec 31 12:37:38.953: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.63544ms
Dec 31 12:37:40.964: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01552733s
Dec 31 12:37:42.986: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037769326s
Dec 31 12:37:45.002: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05377125s
Dec 31 12:37:47.016: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067599697s
Dec 31 12:37:49.030: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081316879s
STEP: Saw pod success
Dec 31 12:37:49.030: INFO: Pod "pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:37:49.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 12:37:49.703: INFO: Waiting for pod pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005 to disappear
Dec 31 12:37:49.722: INFO: Pod pod-projected-configmaps-5480d6a3-2bca-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:37:49.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mjp8t" for this suite.
Dec 31 12:37:55.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:37:56.037: INFO: namespace: e2e-tests-projected-mjp8t, resource: bindings, ignored listing per whitelist
Dec 31 12:37:56.065: INFO: namespace e2e-tests-projected-mjp8t deletion completed in 6.258152645s

• [SLOW TEST:17.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:37:56.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 31 12:40:58.010: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:40:58.036: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:00.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:00.099: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:02.038: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:02.066: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:04.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:04.129: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:06.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:06.134: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:08.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:08.167: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:10.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:10.056: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:12.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:12.059: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:14.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:14.067: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:16.037: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:16.051: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:18.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:18.048: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:20.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:20.049: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:22.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:22.053: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:24.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:24.066: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:26.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:26.053: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:28.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:28.050: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:30.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:30.059: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:32.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:32.068: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:34.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:34.071: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:36.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:36.050: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:38.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:38.053: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:40.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:40.050: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:42.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:42.053: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 12:41:44.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 12:41:44.061: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:41:44.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gmjgc" for this suite.
Dec 31 12:42:08.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:42:08.299: INFO: namespace: e2e-tests-container-lifecycle-hook-gmjgc, resource: bindings, ignored listing per whitelist
Dec 31 12:42:08.303: INFO: namespace e2e-tests-container-lifecycle-hook-gmjgc deletion completed in 24.225504857s

• [SLOW TEST:252.238 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
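The long tail of "still exists" lines above is a fixed-interval poll: the framework re-checks every 2 seconds until the pod is gone or a deadline passes. A self-contained sketch of that loop (names and the injected lookup are illustrative; the real framework queries the API server):

```python
import time
from typing import Callable


def wait_for_pod_to_disappear(pod_exists: Callable[[], bool],
                              timeout_s: float = 300.0,
                              interval_s: float = 2.0,
                              sleep: Callable[[float], None] = time.sleep) -> bool:
    """Poll until pod_exists() is False; True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not pod_exists():
            return True          # "Pod ... no longer exists"
        sleep(interval_s)        # "Pod ... still exists", retry in 2s
    return False
```

Injecting `sleep` keeps the loop testable without real delays.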
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:42:08.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 31 12:42:08.529: INFO: Waiting up to 5m0s for pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005" in namespace "e2e-tests-var-expansion-hvsbn" to be "success or failure"
Dec 31 12:42:08.712: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 183.338143ms
Dec 31 12:42:10.724: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195304458s
Dec 31 12:42:12.760: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231081628s
Dec 31 12:42:14.775: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246051928s
Dec 31 12:42:16.844: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314665182s
Dec 31 12:42:18.862: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332620261s
STEP: Saw pod success
Dec 31 12:42:18.862: INFO: Pod "var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:42:18.869: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 12:42:19.045: INFO: Waiting for pod var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005 to disappear
Dec 31 12:42:19.061: INFO: Pod var-expansion-f52d9b27-2bca-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:42:19.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-hvsbn" for this suite.
Dec 31 12:42:25.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:42:25.317: INFO: namespace: e2e-tests-var-expansion-hvsbn, resource: bindings, ignored listing per whitelist
Dec 31 12:42:25.328: INFO: namespace e2e-tests-var-expansion-hvsbn deletion completed in 6.232798555s

• [SLOW TEST:17.024 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
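The variable-expansion test verifies that `$(VAR_NAME)` references in a container's command are replaced from the container's environment before the process starts. A minimal re-implementation of that substitution; Kubernetes leaves unresolvable references intact, which this sketch mimics (the `$$(VAR)` escape form is omitted for brevity):

```python
import re


def expand_command(command: list, env: dict) -> list:
    """Expand $(VAR) references in a container command, kubelet-style."""
    def expand_one(arg: str) -> str:
        # Unknown variables are left verbatim rather than replaced with "".
        return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                      lambda m: env.get(m.group(1), m.group(0)), arg)
    return [expand_one(a) for a in command]
```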
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:42:25.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dlptf
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 31 12:42:25.729: INFO: Found 0 stateful pods, waiting for 3
Dec 31 12:42:35.744: INFO: Found 2 stateful pods, waiting for 3
Dec 31 12:42:45.739: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:42:45.739: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:42:45.739: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 12:42:55.750: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:42:55.750: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:42:55.750: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 31 12:42:55.803: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 31 12:43:05.916: INFO: Updating stateful set ss2
Dec 31 12:43:05.933: INFO: Waiting for Pod e2e-tests-statefulset-dlptf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 12:43:15.954: INFO: Waiting for Pod e2e-tests-statefulset-dlptf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 31 12:43:26.533: INFO: Found 2 stateful pods, waiting for 3
Dec 31 12:43:36.572: INFO: Found 2 stateful pods, waiting for 3
Dec 31 12:43:46.588: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:43:46.589: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:43:46.589: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 12:43:56.561: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:43:56.561: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 12:43:56.561: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 31 12:43:56.621: INFO: Updating stateful set ss2
Dec 31 12:43:56.658: INFO: Waiting for Pod e2e-tests-statefulset-dlptf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 12:44:06.684: INFO: Waiting for Pod e2e-tests-statefulset-dlptf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 12:44:16.752: INFO: Updating stateful set ss2
Dec 31 12:44:16.769: INFO: Waiting for StatefulSet e2e-tests-statefulset-dlptf/ss2 to complete update
Dec 31 12:44:16.769: INFO: Waiting for Pod e2e-tests-statefulset-dlptf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 12:44:26.801: INFO: Waiting for StatefulSet e2e-tests-statefulset-dlptf/ss2 to complete update
Dec 31 12:44:26.801: INFO: Waiting for Pod e2e-tests-statefulset-dlptf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 12:44:36.841: INFO: Waiting for StatefulSet e2e-tests-statefulset-dlptf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 31 12:44:46.793: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dlptf
Dec 31 12:44:46.796: INFO: Scaling statefulset ss2 to 0
Dec 31 12:45:26.873: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 12:45:26.884: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:45:26.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dlptf" for this suite.
Dec 31 12:45:35.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:45:35.454: INFO: namespace: e2e-tests-statefulset-dlptf, resource: bindings, ignored listing per whitelist
Dec 31 12:45:35.617: INFO: namespace e2e-tests-statefulset-dlptf deletion completed in 8.403528181s

• [SLOW TEST:190.289 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
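The canary and phased rolling updates exercised above are driven by the StatefulSet `spec.updateStrategy.rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new template revision. A minimal sketch of the kind of manifest this test exercises (the name and image tags mirror the log; everything else is illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                        # name taken from the log
spec:
  serviceName: test                # illustrative headless service
  replicas: 3
  selector:
    matchLabels: {app: ss2}
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                 # canary: only ss2-2 gets the new revision
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

Lowering the partition step by step (2 → 1 → 0) produces the phased rollout of ss2-2, then ss2-1, then ss2-0 seen in the log.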
SSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:45:35.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-70be8f4b-2bcb-11ea-a129-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-70be9005-2bcb-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-70be8f4b-2bcb-11ea-a129-0242ac110005
STEP: Updating configmap cm-test-opt-upd-70be9005-2bcb-11ea-a129-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-70be905d-2bcb-11ea-a129-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:45:50.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-74gm6" for this suite.
Dec 31 12:46:16.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:46:16.456: INFO: namespace: e2e-tests-projected-74gm6, resource: bindings, ignored listing per whitelist
Dec 31 12:46:16.752: INFO: namespace e2e-tests-projected-74gm6 deletion completed in 26.42573286s

• [SLOW TEST:41.135 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
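The "optional updates" behavior tested above hinges on `optional: true` in the projected volume sources: a referenced ConfigMap may be deleted or created mid-flight without failing the pod, and the kubelet reflects the change in the mounted volume. A sketch of such a volume (names shortened from the log):

```yaml
volumes:
- name: projected-configmaps
  projected:
    sources:
    - configMap:
        name: cm-test-opt-del      # deleted mid-test; optional, so the pod keeps running
        optional: true
    - configMap:
        name: cm-test-opt-upd      # updated mid-test; new data appears in the volume
        optional: true
```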
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:46:16.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 31 12:46:17.063: INFO: Waiting up to 5m0s for pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005" in namespace "e2e-tests-containers-7s5rh" to be "success or failure"
Dec 31 12:46:17.090: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.061658ms
Dec 31 12:46:19.103: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040623635s
Dec 31 12:46:21.119: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056409904s
Dec 31 12:46:23.128: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065614511s
Dec 31 12:46:25.180: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.116713264s
Dec 31 12:46:27.199: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136282705s
STEP: Saw pod success
Dec 31 12:46:27.199: INFO: Pod "client-containers-893f5731-2bcb-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:46:27.208: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-893f5731-2bcb-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 12:46:27.945: INFO: Waiting for pod client-containers-893f5731-2bcb-11ea-a129-0242ac110005 to disappear
Dec 31 12:46:28.207: INFO: Pod client-containers-893f5731-2bcb-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:46:28.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7s5rh" for this suite.
Dec 31 12:46:34.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:46:34.425: INFO: namespace: e2e-tests-containers-7s5rh, resource: bindings, ignored listing per whitelist
Dec 31 12:46:34.458: INFO: namespace e2e-tests-containers-7s5rh deletion completed in 6.230953802s

• [SLOW TEST:17.706 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
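Overriding "the image's default command and arguments", as tested above, maps to the pod spec's `command` (replacing the image ENTRYPOINT) and `args` (replacing the image CMD). A minimal illustrative pod for the "override all" case (image and echo payload are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox
    command: ["/bin/sh"]           # replaces the image's ENTRYPOINT
    args: ["-c", "echo override all"]   # replaces the image's CMD
```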
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:46:34.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 31 12:46:34.651: INFO: Waiting up to 5m0s for pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005" in namespace "e2e-tests-downward-api-n62sj" to be "success or failure"
Dec 31 12:46:34.656: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552871ms
Dec 31 12:46:36.664: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012564306s
Dec 31 12:46:38.678: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027010019s
Dec 31 12:46:40.698: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04646827s
Dec 31 12:46:42.731: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079380947s
Dec 31 12:46:44.763: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111337539s
STEP: Saw pod success
Dec 31 12:46:44.763: INFO: Pod "downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:46:44.902: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 31 12:46:45.121: INFO: Waiting for pod downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005 to disappear
Dec 31 12:46:45.142: INFO: Pod downward-api-93ce4d6a-2bcb-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:46:45.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n62sj" for this suite.
Dec 31 12:46:51.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:46:51.432: INFO: namespace: e2e-tests-downward-api-n62sj, resource: bindings, ignored listing per whitelist
Dec 31 12:46:51.470: INFO: namespace e2e-tests-downward-api-n62sj deletion completed in 6.316180547s

• [SLOW TEST:17.011 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
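The downward API env-var test above injects pod name, namespace, and IP through `fieldRef` entries. A sketch of the container `env` section such a test pod would carry (variable names are illustrative):

```yaml
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
```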
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:46:51.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 31 12:46:52.134: INFO: Waiting up to 5m0s for pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq" in namespace "e2e-tests-svcaccounts-ftxcj" to be "success or failure"
Dec 31 12:46:52.254: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 119.472661ms
Dec 31 12:46:54.328: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194166676s
Dec 31 12:46:56.367: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232373139s
Dec 31 12:46:58.383: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24910736s
Dec 31 12:47:00.502: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368323551s
Dec 31 12:47:02.787: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652473945s
Dec 31 12:47:04.801: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.666825002s
Dec 31 12:47:06.812: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.678266557s
STEP: Saw pod success
Dec 31 12:47:06.812: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq" satisfied condition "success or failure"
Dec 31 12:47:06.816: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq container token-test: 
STEP: delete the pod
Dec 31 12:47:07.143: INFO: Waiting for pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq to disappear
Dec 31 12:47:07.351: INFO: Pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-g7thq no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 31 12:47:07.389: INFO: Waiting up to 5m0s for pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn" in namespace "e2e-tests-svcaccounts-ftxcj" to be "success or failure"
Dec 31 12:47:07.415: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 25.42125ms
Dec 31 12:47:09.435: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045528775s
Dec 31 12:47:11.446: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056063863s
Dec 31 12:47:13.653: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263453759s
Dec 31 12:47:16.286: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.896899901s
Dec 31 12:47:18.933: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.543191513s
Dec 31 12:47:20.948: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.558089541s
Dec 31 12:47:22.965: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.575123892s
STEP: Saw pod success
Dec 31 12:47:22.965: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn" satisfied condition "success or failure"
Dec 31 12:47:22.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn container root-ca-test: 
STEP: delete the pod
Dec 31 12:47:23.408: INFO: Waiting for pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn to disappear
Dec 31 12:47:23.429: INFO: Pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-nfspn no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 31 12:47:23.455: INFO: Waiting up to 5m0s for pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw" in namespace "e2e-tests-svcaccounts-ftxcj" to be "success or failure"
Dec 31 12:47:23.521: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Pending", Reason="", readiness=false. Elapsed: 65.744757ms
Dec 31 12:47:26.016: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56036538s
Dec 31 12:47:28.068: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612850163s
Dec 31 12:47:30.315: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.860270721s
Dec 31 12:47:32.461: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005498529s
Dec 31 12:47:34.621: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.16599975s
Dec 31 12:47:36.695: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Running", Reason="", readiness=false. Elapsed: 13.239816234s
Dec 31 12:47:38.747: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.291599622s
STEP: Saw pod success
Dec 31 12:47:38.747: INFO: Pod "pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw" satisfied condition "success or failure"
Dec 31 12:47:38.757: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw container namespace-test: 
STEP: delete the pod
Dec 31 12:47:38.924: INFO: Waiting for pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw to disappear
Dec 31 12:47:38.936: INFO: Pod pod-service-account-9e3725aa-2bcb-11ea-a129-0242ac110005-2slzw no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:47:38.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-ftxcj" for this suite.
Dec 31 12:47:46.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:47:47.151: INFO: namespace: e2e-tests-svcaccounts-ftxcj, resource: bindings, ignored listing per whitelist
Dec 31 12:47:47.156: INFO: namespace e2e-tests-svcaccounts-ftxcj deletion completed in 8.212093301s

• [SLOW TEST:55.685 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
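The three sub-steps above (token, root CA, namespace) correspond to the three files the kubelet auto-mounts from the service-account secret. A sketch of the relevant pod spec fields and the well-known mount paths:

```yaml
# Auto-mounted at:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
#   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#   /var/run/secrets/kubernetes.io/serviceaccount/namespace
spec:
  serviceAccountName: default
  automountServiceAccountToken: true
```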
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:47:47.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-bf1fe0df-2bcb-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:47:47.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-jh5jn" to be "success or failure"
Dec 31 12:47:47.339: INFO: Pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.82289ms
Dec 31 12:47:49.366: INFO: Pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037151902s
Dec 31 12:47:51.408: INFO: Pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079270867s
Dec 31 12:47:53.523: INFO: Pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193440726s
Dec 31 12:47:55.533: INFO: Pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.203831614s
STEP: Saw pod success
Dec 31 12:47:55.533: INFO: Pod "pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:47:55.537: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 31 12:47:55.671: INFO: Waiting for pod pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005 to disappear
Dec 31 12:47:55.675: INFO: Pod pod-configmaps-bf20ab0f-2bcb-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:47:55.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jh5jn" for this suite.
Dec 31 12:48:01.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:48:01.939: INFO: namespace: e2e-tests-configmap-jh5jn, resource: bindings, ignored listing per whitelist
Dec 31 12:48:01.975: INFO: namespace e2e-tests-configmap-jh5jn deletion completed in 6.292598325s

• [SLOW TEST:14.819 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
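Each of these pod tests follows the same loop visible in the log: "Waiting up to 5m0s for pod ... to be 'success or failure'", polling the pod phase every couple of seconds until it is terminal or the deadline passes. A minimal sketch of that polling pattern (a hypothetical helper, not the framework's actual code):

```python
import time

def wait_for_phase(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase.

    Mirrors the log's Pending -> (Running) -> Succeeded/Failed progression.
    Raises TimeoutError if the deadline (default 5m0s) passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")
```

In the real framework `get_phase` would read `pod.status.phase` from the API server; here it is abstracted so the loop can be exercised standalone.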
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:48:01.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c80402ca-2bcb-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:48:16.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4896q" for this suite.
Dec 31 12:48:40.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:48:40.878: INFO: namespace: e2e-tests-configmap-4896q, resource: bindings, ignored listing per whitelist
Dec 31 12:48:40.926: INFO: namespace e2e-tests-configmap-4896q deletion completed in 24.356288998s

• [SLOW TEST:38.951 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
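The "binary data" variant above relies on the ConfigMap `binaryData` field, which holds base64-encoded bytes alongside plain-text `data` keys. A sketch (keys and payload are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd         # name shortened from the log
data:
  text: "hello"                    # mounted as a plain-text file
binaryData:
  blob: aGVsbG8=                   # base64-encoded bytes, mounted verbatim
```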
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:48:40.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rp9vr
Dec 31 12:48:51.251: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rp9vr
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 12:48:51.256: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:52:52.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rp9vr" for this suite.
Dec 31 12:52:59.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:52:59.212: INFO: namespace: e2e-tests-container-probe-rp9vr, resource: bindings, ignored listing per whitelist
Dec 31 12:52:59.255: INFO: namespace e2e-tests-container-probe-rp9vr deletion completed in 6.310202432s

• [SLOW TEST:258.328 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
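The probe test above runs a pod whose `/healthz` endpoint stays healthy, then watches `restartCount` for ~4 minutes to confirm it stays at 0. A sketch of the kind of HTTP liveness probe involved (port and timings are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15          # grace period before the first probe
  timeoutSeconds: 1                # each probe must answer within 1s
```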
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:52:59.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-7934d1b1-2bcc-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:52:59.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-4zrbg" to be "success or failure"
Dec 31 12:52:59.562: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32431ms
Dec 31 12:53:01.572: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018294322s
Dec 31 12:53:03.592: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038308095s
Dec 31 12:53:05.950: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396220399s
Dec 31 12:53:07.987: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433303176s
Dec 31 12:53:09.997: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.443519964s
STEP: Saw pod success
Dec 31 12:53:09.997: INFO: Pod "pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:53:10.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 31 12:53:10.407: INFO: Waiting for pod pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005 to disappear
Dec 31 12:53:10.451: INFO: Pod pod-configmaps-7938ba35-2bcc-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:53:10.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4zrbg" for this suite.
Dec 31 12:53:16.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:53:17.005: INFO: namespace: e2e-tests-configmap-4zrbg, resource: bindings, ignored listing per whitelist
Dec 31 12:53:17.040: INFO: namespace e2e-tests-configmap-4zrbg deletion completed in 6.581041046s

• [SLOW TEST:17.785 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
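The ConfigMap test above waits for the pod to satisfy "success or failure" by repeatedly fetching its phase. A minimal sketch of that poll loop, with `get_phase` as a hypothetical stub standing in for a call like `kubectl get pod <name> -o jsonpath='{.status.phase}'` (the phase sequence below is invented so the sketch runs without a cluster):

```shell
#!/bin/sh
# Sketch of the "success or failure" wait seen in the log above.
i=0
get_phase() {
  # Stub (hypothetical): report Pending for two polls, then Succeeded.
  if [ "$i" -lt 2 ]; then echo Pending; else echo Succeeded; fi
}
while :; do
  phase=$(get_phase)
  echo "Phase=$phase"
  [ "$phase" = Succeeded ] && break   # condition satisfied
  [ "$phase" = Failed ] && break      # also terminal
  i=$((i + 1))                        # real framework sleeps ~2s here
done
```

The real framework additionally enforces the 5m0s timeout visible in the log and logs the elapsed time on each poll.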
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:53:17.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 31 12:53:17.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-x2bpb" to be "success or failure"
Dec 31 12:53:17.699: INFO: Pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 237.972816ms
Dec 31 12:53:19.715: INFO: Pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253983221s
Dec 31 12:53:21.740: INFO: Pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278379398s
Dec 31 12:53:23.853: INFO: Pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392035237s
Dec 31 12:53:25.948: INFO: Pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.487055671s
STEP: Saw pod success
Dec 31 12:53:25.948: INFO: Pod "downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:53:25.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005 container client-container: 
STEP: delete the pod
Dec 31 12:53:26.039: INFO: Waiting for pod downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005 to disappear
Dec 31 12:53:26.142: INFO: Pod downwardapi-volume-83e3e47c-2bcc-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:53:26.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x2bpb" for this suite.
Dec 31 12:53:34.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:53:34.302: INFO: namespace: e2e-tests-projected-x2bpb, resource: bindings, ignored listing per whitelist
Dec 31 12:53:34.335: INFO: namespace e2e-tests-projected-x2bpb deletion completed in 8.174565216s

• [SLOW TEST:17.294 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:53:34.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2m759
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 12:53:34.538: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 12:54:08.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-2m759 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 12:54:08.965: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:54:09.591: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:54:09.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2m759" for this suite.
Dec 31 12:54:39.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:54:39.803: INFO: namespace: e2e-tests-pod-network-test-2m759, resource: bindings, ignored listing per whitelist
Dec 31 12:54:39.994: INFO: namespace e2e-tests-pod-network-test-2m759 deletion completed in 30.384990061s

• [SLOW TEST:65.658 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
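The intra-pod check above works by exec'ing a `curl` in a host test pod against the netserver's `/dial` endpoint, which in turn fans the request out to the target pod. A sketch of how that dial URL is assembled, using the two pod IPs from the log purely as illustrative values (the real test substitutes whatever IPs the pods were assigned):

```shell
#!/bin/sh
# Build the /dial probe URL used by the pod-network-test above.
host_ip=10.32.0.5   # test-container pod running the dial proxy (from the log)
target=10.32.0.4    # netserver pod under test (from the log)
url="http://${host_ip}:8080/dial?request=hostName&protocol=http&host=${target}&port=8080&tries=1"
echo "$url"
# Inside the cluster the framework then runs, via ExecWithOptions:
#   curl -g -q -s "$url"
# and expects a JSON body naming the target pod's hostname.
```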
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:54:39.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 31 12:54:40.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:54:44.245: INFO: stderr: ""
Dec 31 12:54:44.245: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 12:54:44.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:54:44.378: INFO: stderr: ""
Dec 31 12:54:44.378: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Dec 31 12:54:49.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:54:51.152: INFO: stderr: ""
Dec 31 12:54:51.153: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp "
Dec 31 12:54:51.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:54:51.453: INFO: stderr: ""
Dec 31 12:54:51.454: INFO: stdout: ""
Dec 31 12:54:51.454: INFO: update-demo-nautilus-2z7n4 is created but not running
Dec 31 12:54:56.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:54:58.259: INFO: stderr: ""
Dec 31 12:54:58.260: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp "
Dec 31 12:54:58.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:00.475: INFO: stderr: ""
Dec 31 12:55:00.475: INFO: stdout: ""
Dec 31 12:55:00.476: INFO: update-demo-nautilus-2z7n4 is created but not running
Dec 31 12:55:05.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:05.646: INFO: stderr: ""
Dec 31 12:55:05.646: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp "
Dec 31 12:55:05.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:05.828: INFO: stderr: ""
Dec 31 12:55:05.828: INFO: stdout: "true"
Dec 31 12:55:05.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:05.980: INFO: stderr: ""
Dec 31 12:55:05.980: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:05.980: INFO: validating pod update-demo-nautilus-2z7n4
Dec 31 12:55:06.211: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:06.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:06.211: INFO: update-demo-nautilus-2z7n4 is verified up and running
Dec 31 12:55:06.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcfcp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:06.409: INFO: stderr: ""
Dec 31 12:55:06.409: INFO: stdout: "true"
Dec 31 12:55:06.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcfcp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:06.656: INFO: stderr: ""
Dec 31 12:55:06.656: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:06.656: INFO: validating pod update-demo-nautilus-pcfcp
Dec 31 12:55:06.670: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:06.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:06.671: INFO: update-demo-nautilus-pcfcp is verified up and running
STEP: scaling down the replication controller
Dec 31 12:55:06.673: INFO: scanned /root for discovery docs: 
Dec 31 12:55:06.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:08.122: INFO: stderr: ""
Dec 31 12:55:08.122: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 12:55:08.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:08.316: INFO: stderr: ""
Dec 31 12:55:08.316: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 31 12:55:13.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:13.555: INFO: stderr: ""
Dec 31 12:55:13.555: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 31 12:55:18.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:18.941: INFO: stderr: ""
Dec 31 12:55:18.941: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 31 12:55:23.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:24.206: INFO: stderr: ""
Dec 31 12:55:24.206: INFO: stdout: "update-demo-nautilus-2z7n4 "
Dec 31 12:55:24.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:24.319: INFO: stderr: ""
Dec 31 12:55:24.319: INFO: stdout: "true"
Dec 31 12:55:24.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:24.457: INFO: stderr: ""
Dec 31 12:55:24.457: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:24.457: INFO: validating pod update-demo-nautilus-2z7n4
Dec 31 12:55:24.468: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:24.468: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:24.468: INFO: update-demo-nautilus-2z7n4 is verified up and running
STEP: scaling up the replication controller
Dec 31 12:55:24.472: INFO: scanned /root for discovery docs: 
Dec 31 12:55:24.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:25.940: INFO: stderr: ""
Dec 31 12:55:25.940: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 12:55:25.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:26.154: INFO: stderr: ""
Dec 31 12:55:26.154: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-sqgc6 "
Dec 31 12:55:26.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:26.297: INFO: stderr: ""
Dec 31 12:55:26.297: INFO: stdout: "true"
Dec 31 12:55:26.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:26.579: INFO: stderr: ""
Dec 31 12:55:26.579: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:26.579: INFO: validating pod update-demo-nautilus-2z7n4
Dec 31 12:55:26.689: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:26.690: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:26.690: INFO: update-demo-nautilus-2z7n4 is verified up and running
Dec 31 12:55:26.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqgc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:26.856: INFO: stderr: ""
Dec 31 12:55:26.856: INFO: stdout: ""
Dec 31 12:55:26.856: INFO: update-demo-nautilus-sqgc6 is created but not running
Dec 31 12:55:31.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:32.282: INFO: stderr: ""
Dec 31 12:55:32.282: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-sqgc6 "
Dec 31 12:55:32.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:32.518: INFO: stderr: ""
Dec 31 12:55:32.518: INFO: stdout: "true"
Dec 31 12:55:32.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:32.757: INFO: stderr: ""
Dec 31 12:55:32.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:32.757: INFO: validating pod update-demo-nautilus-2z7n4
Dec 31 12:55:32.995: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:32.995: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:32.995: INFO: update-demo-nautilus-2z7n4 is verified up and running
Dec 31 12:55:32.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqgc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:33.318: INFO: stderr: ""
Dec 31 12:55:33.319: INFO: stdout: ""
Dec 31 12:55:33.319: INFO: update-demo-nautilus-sqgc6 is created but not running
Dec 31 12:55:38.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:38.602: INFO: stderr: ""
Dec 31 12:55:38.602: INFO: stdout: "update-demo-nautilus-2z7n4 update-demo-nautilus-sqgc6 "
Dec 31 12:55:38.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:38.724: INFO: stderr: ""
Dec 31 12:55:38.724: INFO: stdout: "true"
Dec 31 12:55:38.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2z7n4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:38.841: INFO: stderr: ""
Dec 31 12:55:38.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:38.841: INFO: validating pod update-demo-nautilus-2z7n4
Dec 31 12:55:38.858: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:38.859: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:38.859: INFO: update-demo-nautilus-2z7n4 is verified up and running
Dec 31 12:55:38.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqgc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:39.066: INFO: stderr: ""
Dec 31 12:55:39.066: INFO: stdout: "true"
Dec 31 12:55:39.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqgc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:39.182: INFO: stderr: ""
Dec 31 12:55:39.182: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:55:39.182: INFO: validating pod update-demo-nautilus-sqgc6
Dec 31 12:55:39.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 31 12:55:39.206: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:55:39.206: INFO: update-demo-nautilus-sqgc6 is verified up and running
STEP: using delete to clean up resources
Dec 31 12:55:39.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:39.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 12:55:39.379: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 31 12:55:39.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8cf96'
Dec 31 12:55:39.651: INFO: stderr: "No resources found.\n"
Dec 31 12:55:39.652: INFO: stdout: ""
Dec 31 12:55:39.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8cf96 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 31 12:55:39.816: INFO: stderr: ""
Dec 31 12:55:39.816: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:55:39.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8cf96" for this suite.
Dec 31 12:56:05.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:56:06.024: INFO: namespace: e2e-tests-kubectl-8cf96, resource: bindings, ignored listing per whitelist
Dec 31 12:56:06.064: INFO: namespace e2e-tests-kubectl-8cf96 deletion completed in 26.212384252s

• [SLOW TEST:86.069 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
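The scale-down above follows a simple pattern: issue `kubectl scale rc ... --replicas=1`, then poll the labeled pod list until the observed count matches. A sketch of that verify loop, where `list_pods` is a hypothetical stub for the `kubectl get pods -l name=update-demo -o template` call in the log (the pod names and the two-poll delay are taken from the log only for illustration):

```shell
#!/bin/sh
# Sketch of the scale-and-verify loop from the Update Demo test above.
want=1
n=0
list_pods() {
  # Stub (hypothetical): both pods for two polls, then one after scale-down.
  if [ "$n" -lt 2 ]; then
    echo "update-demo-nautilus-2z7n4 update-demo-nautilus-pcfcp"
  else
    echo "update-demo-nautilus-2z7n4"
  fi
}
while :; do
  pods=$(list_pods)
  got=$(echo "$pods" | wc -w)
  echo "Replicas for name=update-demo: expected=$want actual=$got"
  [ "$got" -eq "$want" ] && break    # count matches; go validate each pod
  n=$((n + 1))                       # real framework sleeps 5s between polls
done
```

Once the count matches, the test validates each remaining pod individually (running state, image, and served content), exactly as the per-pod go-template queries in the log show.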
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:56:06.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e8924495-2bcc-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:56:06.446: INFO: Waiting up to 5m0s for pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-j85cf" to be "success or failure"
Dec 31 12:56:06.463: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.587131ms
Dec 31 12:56:08.488: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042429158s
Dec 31 12:56:10.528: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082045857s
Dec 31 12:56:12.760: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314774829s
Dec 31 12:56:14.845: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.399687429s
Dec 31 12:56:16.875: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.429562548s
Dec 31 12:56:18.903: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.457033344s
Dec 31 12:56:21.887: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.441350805s
STEP: Saw pod success
Dec 31 12:56:21.887: INFO: Pod "pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:56:22.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 12:56:22.869: INFO: Waiting for pod pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005 to disappear
Dec 31 12:56:22.892: INFO: Pod pod-secrets-e893bccb-2bcc-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:56:22.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j85cf" for this suite.
Dec 31 12:56:29.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:56:29.168: INFO: namespace: e2e-tests-secrets-j85cf, resource: bindings, ignored listing per whitelist
Dec 31 12:56:29.242: INFO: namespace e2e-tests-secrets-j85cf deletion completed in 6.223651215s

• [SLOW TEST:23.178 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:56:29.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-qmnxl;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-qmnxl;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qmnxl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 95.216.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.216.95_udp@PTR;check="$$(dig +tcp +noall +answer +search 95.216.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.216.95_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-qmnxl;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-qmnxl;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qmnxl.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qmnxl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 95.216.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.216.95_udp@PTR;check="$$(dig +tcp +noall +answer +search 95.216.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.216.95_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 12:56:43.754: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:43.765: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:43.792: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-qmnxl from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:43.876: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-qmnxl from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:43.891: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:43.954: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:43.990: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.008: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.033: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.054: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.069: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.099: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.134: INFO: Unable to read 10.100.216.95_udp@PTR from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.183: INFO: Unable to read 10.100.216.95_tcp@PTR from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.281: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.303: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.323: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qmnxl from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.331: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qmnxl from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.341: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.354: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.388: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.397: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.402: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.406: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.409: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.413: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.417: INFO: Unable to read 10.100.216.95_udp@PTR from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.421: INFO: Unable to read 10.100.216.95_tcp@PTR from pod e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005: the server could not find the requested resource (get pods dns-test-f65fca55-2bcc-11ea-a129-0242ac110005)
Dec 31 12:56:44.421: INFO: Lookups using e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-qmnxl wheezy_tcp@dns-test-service.e2e-tests-dns-qmnxl wheezy_udp@dns-test-service.e2e-tests-dns-qmnxl.svc wheezy_tcp@dns-test-service.e2e-tests-dns-qmnxl.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.100.216.95_udp@PTR 10.100.216.95_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qmnxl jessie_tcp@dns-test-service.e2e-tests-dns-qmnxl jessie_udp@dns-test-service.e2e-tests-dns-qmnxl.svc jessie_tcp@dns-test-service.e2e-tests-dns-qmnxl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qmnxl.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qmnxl.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.100.216.95_udp@PTR 10.100.216.95_tcp@PTR]

Dec 31 12:56:49.685: INFO: DNS probes using e2e-tests-dns-qmnxl/dns-test-f65fca55-2bcc-11ea-a129-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:56:50.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-qmnxl" for this suite.
Dec 31 12:57:00.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:57:00.443: INFO: namespace: e2e-tests-dns-qmnxl, resource: bindings, ignored listing per whitelist
Dec 31 12:57:00.468: INFO: namespace e2e-tests-dns-qmnxl deletion completed in 8.604532733s

• [SLOW TEST:31.225 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:57:00.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-09269d38-2bcd-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 12:57:01.054: INFO: Waiting up to 5m0s for pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-bc2hf" to be "success or failure"
Dec 31 12:57:01.242: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 188.517875ms
Dec 31 12:57:03.376: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322584535s
Dec 31 12:57:05.398: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344526741s
Dec 31 12:57:07.479: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425559052s
Dec 31 12:57:09.516: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.461998801s
Dec 31 12:57:11.531: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.477397644s
STEP: Saw pod success
Dec 31 12:57:11.531: INFO: Pod "pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:57:11.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 12:57:12.690: INFO: Waiting for pod pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005 to disappear
Dec 31 12:57:12.702: INFO: Pod pod-secrets-0928bde8-2bcd-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:57:12.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bc2hf" for this suite.
Dec 31 12:57:20.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:57:20.898: INFO: namespace: e2e-tests-secrets-bc2hf, resource: bindings, ignored listing per whitelist
Dec 31 12:57:20.998: INFO: namespace e2e-tests-secrets-bc2hf deletion completed in 8.280577604s

• [SLOW TEST:20.529 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:57:20.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-155c142d-2bcd-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 12:57:21.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-dwv66" to be "success or failure"
Dec 31 12:57:21.677: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04777ms
Dec 31 12:57:24.471: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.799600805s
Dec 31 12:57:26.493: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.822344224s
Dec 31 12:57:28.507: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.835887892s
Dec 31 12:57:30.534: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862713462s
Dec 31 12:57:32.560: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.888803891s
Dec 31 12:57:34.764: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.092587481s
STEP: Saw pod success
Dec 31 12:57:34.764: INFO: Pod "pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 12:57:34.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 31 12:57:35.184: INFO: Waiting for pod pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005 to disappear
Dec 31 12:57:35.200: INFO: Pod pod-configmaps-155d5949-2bcd-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:57:35.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dwv66" for this suite.
Dec 31 12:57:41.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:57:41.678: INFO: namespace: e2e-tests-configmap-dwv66, resource: bindings, ignored listing per whitelist
Dec 31 12:57:41.683: INFO: namespace e2e-tests-configmap-dwv66 deletion completed in 6.338544365s

• [SLOW TEST:20.685 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:57:41.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 31 12:57:58.145: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:57:58.337: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:00.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:00.393: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:02.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:02.350: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:04.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:04.358: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:06.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:06.359: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:08.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:08.352: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:10.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:10.373: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:12.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:12.352: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:14.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:14.351: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:16.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:16.353: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:18.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:18.352: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:20.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:20.371: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:22.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:22.355: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 31 12:58:24.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 31 12:58:24.346: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:58:24.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-shgrs" for this suite.
Dec 31 12:58:48.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:58:48.702: INFO: namespace: e2e-tests-container-lifecycle-hook-shgrs, resource: bindings, ignored listing per whitelist
Dec 31 12:58:48.729: INFO: namespace e2e-tests-container-lifecycle-hook-shgrs deletion completed in 24.352487393s

• [SLOW TEST:67.046 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:58:48.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 31 12:59:03.842: INFO: Successfully updated pod "labelsupdate498370fd-2bcd-11ea-a129-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:59:06.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dc7lg" for this suite.
Dec 31 12:59:31.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:59:31.261: INFO: namespace: e2e-tests-projected-dc7lg, resource: bindings, ignored listing per whitelist
Dec 31 12:59:31.306: INFO: namespace e2e-tests-projected-dc7lg deletion completed in 24.299762813s

• [SLOW TEST:42.577 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 12:59:31.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 31 12:59:44.709: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 12:59:45.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-q5vn7" for this suite.
Dec 31 13:00:28.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:00:28.699: INFO: namespace: e2e-tests-replicaset-q5vn7, resource: bindings, ignored listing per whitelist
Dec 31 13:00:28.976: INFO: namespace e2e-tests-replicaset-q5vn7 deletion completed in 42.945100676s

• [SLOW TEST:57.670 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:00:28.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 31 13:00:29.275: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix429428165/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:00:29.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8kbf7" for this suite.
Dec 31 13:00:37.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:00:37.565: INFO: namespace: e2e-tests-kubectl-8kbf7, resource: bindings, ignored listing per whitelist
Dec 31 13:00:37.834: INFO: namespace e2e-tests-kubectl-8kbf7 deletion completed in 8.396652012s

• [SLOW TEST:8.857 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:00:37.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-8a8c2d8d-2bcd-11ea-a129-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-8a8c2e1a-2bcd-11ea-a129-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8a8c2d8d-2bcd-11ea-a129-0242ac110005
STEP: Updating configmap cm-test-opt-upd-8a8c2e1a-2bcd-11ea-a129-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-8a8c2e3f-2bcd-11ea-a129-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:02:13.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kcjfx" for this suite.
Dec 31 13:02:39.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:02:39.438: INFO: namespace: e2e-tests-configmap-kcjfx, resource: bindings, ignored listing per whitelist
Dec 31 13:02:39.555: INFO: namespace e2e-tests-configmap-kcjfx deletion completed in 26.266540788s

• [SLOW TEST:121.719 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:02:39.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1231 13:03:21.761676       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 13:03:21.761: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:03:21.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ffdxj" for this suite.
Dec 31 13:03:35.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:03:35.906: INFO: namespace: e2e-tests-gc-ffdxj, resource: bindings, ignored listing per whitelist
Dec 31 13:03:35.924: INFO: namespace e2e-tests-gc-ffdxj deletion completed in 14.158363361s

• [SLOW TEST:56.369 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
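The garbage-collector test above deletes a replication controller with an orphaning delete option and then watches for 30 seconds to confirm its pods survive. The delete request it issues is roughly equivalent to sending a DeleteOptions body like the following (a sketch; the test constructs this through the Go client rather than YAML):

```yaml
# Orphan propagation: the RC object itself is removed, but the garbage
# collector strips the pods' ownerReferences instead of cascading the
# delete, so the pods are left running without an owner.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```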
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:03:35.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s97nf
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 13:03:37.720: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 13:04:35.109: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-s97nf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:04:35.109: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:04:36.681: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:04:36.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-s97nf" for this suite.
Dec 31 13:05:02.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:03.017: INFO: namespace: e2e-tests-pod-network-test-s97nf, resource: bindings, ignored listing per whitelist
Dec 31 13:05:03.056: INFO: namespace e2e-tests-pod-network-test-s97nf deletion completed in 26.353390924s

• [SLOW TEST:87.132 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:05:03.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 31 13:05:15.475: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:05:44.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6ngj4" for this suite.
Dec 31 13:05:50.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:50.484: INFO: namespace: e2e-tests-namespaces-6ngj4, resource: bindings, ignored listing per whitelist
Dec 31 13:05:50.653: INFO: namespace e2e-tests-namespaces-6ngj4 deletion completed in 6.548344813s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7lnxs" for this suite.
Dec 31 13:05:50.656: INFO: Namespace e2e-tests-nsdeletetest-7lnxs was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-w84vk" for this suite.
Dec 31 13:05:56.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:56.807: INFO: namespace: e2e-tests-nsdeletetest-w84vk, resource: bindings, ignored listing per whitelist
Dec 31 13:05:56.816: INFO: namespace e2e-tests-nsdeletetest-w84vk deletion completed in 6.159467755s

• [SLOW TEST:53.760 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:05:56.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xtjw2
Dec 31 13:06:07.180: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xtjw2
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 13:06:07.187: INFO: Initial restart count of pod liveness-http is 0
Dec 31 13:06:30.152: INFO: Restart count of pod e2e-tests-container-probe-xtjw2/liveness-http is now 1 (22.964169993s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:06:30.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xtjw2" for this suite.
Dec 31 13:06:36.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:06:36.401: INFO: namespace: e2e-tests-container-probe-xtjw2, resource: bindings, ignored listing per whitelist
Dec 31 13:06:36.754: INFO: namespace e2e-tests-container-probe-xtjw2 deletion completed in 6.498619399s

• [SLOW TEST:39.938 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
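The restart sequence above (restartCount going 0 → 1 after ~23s) is driven by an HTTP liveness probe against /healthz: once the probe starts failing, the kubelet kills and restarts the container. A representative probe configuration (a sketch; the image and timing values are assumptions, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # test server that starts failing /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15    # give the server time to start before probing
      periodSeconds: 5           # probe every 5s; repeated failures trigger a restart
```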
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:06:36.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-6072c867-2bce-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 13:06:37.016: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005" in namespace "e2e-tests-projected-rnstg" to be "success or failure"
Dec 31 13:06:37.052: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.085812ms
Dec 31 13:06:39.523: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507442135s
Dec 31 13:06:41.534: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.518011835s
Dec 31 13:06:43.546: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.530766239s
Dec 31 13:06:45.977: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.961171011s
Dec 31 13:06:48.170: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.154104505s
STEP: Saw pod success
Dec 31 13:06:48.170: INFO: Pod "pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:06:48.178: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 13:06:48.592: INFO: Waiting for pod pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005 to disappear
Dec 31 13:06:48.610: INFO: Pod pod-projected-secrets-60793b36-2bce-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:06:48.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rnstg" for this suite.
Dec 31 13:06:56.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:06:56.924: INFO: namespace: e2e-tests-projected-rnstg, resource: bindings, ignored listing per whitelist
Dec 31 13:06:56.943: INFO: namespace e2e-tests-projected-rnstg deletion completed in 8.320568833s

• [SLOW TEST:20.187 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
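The projected-secret test mounts the same secret into a single pod through two separate projected volumes and verifies the content at both mount points. A sketch of such a pod (names are illustrative, not the generated ones in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/* /etc/projected-secret-volume-2/*"]
    volumeMounts:
    - name: projected-secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: projected-secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: projected-secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # same secret projected into both volumes
  - name: projected-secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
  restartPolicy: Never
```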
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:06:56.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:07:07.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-9cfj7" for this suite.
Dec 31 13:07:49.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:07:49.733: INFO: namespace: e2e-tests-kubelet-test-9cfj7, resource: bindings, ignored listing per whitelist
Dec 31 13:07:49.733: INFO: namespace e2e-tests-kubelet-test-9cfj7 deletion completed in 42.323090107s

• [SLOW TEST:52.790 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:07:49.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 31 13:07:50.196: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-zbnh2" to be "success or failure"
Dec 31 13:07:50.276: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 79.494748ms
Dec 31 13:07:52.453: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256865597s
Dec 31 13:07:54.696: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499600032s
Dec 31 13:07:56.718: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521544051s
Dec 31 13:07:59.417: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.22032644s
Dec 31 13:08:01.436: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.239565211s
Dec 31 13:08:03.458: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.261802969s
Dec 31 13:08:05.470: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.273657062s
Dec 31 13:08:07.538: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.341734513s
STEP: Saw pod success
Dec 31 13:08:07.538: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 31 13:08:07.550: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 31 13:08:07.777: INFO: Waiting for pod pod-host-path-test to disappear
Dec 31 13:08:07.790: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:08:07.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-zbnh2" for this suite.
Dec 31 13:08:15.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:08:16.019: INFO: namespace: e2e-tests-hostpath-zbnh2, resource: bindings, ignored listing per whitelist
Dec 31 13:08:16.056: INFO: namespace e2e-tests-hostpath-zbnh2 deletion completed in 8.230539816s

• [SLOW TEST:26.322 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
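The HostPath test creates a pod that mounts a directory from the node's filesystem and asserts the file mode seen inside the container. A minimal hostPath mount looks like this (the path, image, and command are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # print the mode the test checks
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-demo
      type: DirectoryOrCreate   # create the directory on the node if it is missing
  restartPolicy: Never
```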
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:08:16.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9bac3411-2bce-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 13:08:16.355: INFO: Waiting up to 5m0s for pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-29gvs" to be "success or failure"
Dec 31 13:08:16.595: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 240.075859ms
Dec 31 13:08:18.614: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259047065s
Dec 31 13:08:20.623: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267724693s
Dec 31 13:08:22.726: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370692956s
Dec 31 13:08:24.739: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.384288923s
Dec 31 13:08:26.761: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.405695266s
STEP: Saw pod success
Dec 31 13:08:26.761: INFO: Pod "pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:08:26.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 13:08:26.832: INFO: Waiting for pod pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005 to disappear
Dec 31 13:08:27.012: INFO: Pod pod-secrets-9badd9c9-2bce-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:08:27.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-29gvs" for this suite.
Dec 31 13:08:33.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:08:33.423: INFO: namespace: e2e-tests-secrets-29gvs, resource: bindings, ignored listing per whitelist
Dec 31 13:08:33.423: INFO: namespace e2e-tests-secrets-29gvs deletion completed in 6.391497449s

• [SLOW TEST:17.367 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:08:33.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 31 13:08:33.758: INFO: Waiting up to 5m0s for pod "pod-a60afab0-2bce-11ea-a129-0242ac110005" in namespace "e2e-tests-emptydir-kk86c" to be "success or failure"
Dec 31 13:08:33.787: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.227329ms
Dec 31 13:08:35.806: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047759107s
Dec 31 13:08:37.826: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068518261s
Dec 31 13:08:39.985: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227551749s
Dec 31 13:08:42.009: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251265873s
Dec 31 13:08:44.037: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.2788885s
Dec 31 13:08:46.060: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.30260326s
STEP: Saw pod success
Dec 31 13:08:46.061: INFO: Pod "pod-a60afab0-2bce-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:08:46.065: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a60afab0-2bce-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 13:08:46.281: INFO: Waiting for pod pod-a60afab0-2bce-11ea-a129-0242ac110005 to disappear
Dec 31 13:08:46.298: INFO: Pod pod-a60afab0-2bce-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:08:46.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kk86c" for this suite.
Dec 31 13:08:52.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:08:52.670: INFO: namespace: e2e-tests-emptydir-kk86c, resource: bindings, ignored listing per whitelist
Dec 31 13:08:52.675: INFO: namespace e2e-tests-emptydir-kk86c deletion completed in 6.357278487s

• [SLOW TEST:19.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
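The EmptyDir test requests a tmpfs-backed volume and verifies its mount mode. The distinguishing field is `medium: Memory` on the emptyDir source (a sketch; names, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume"]  # should report a tmpfs mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # back the volume with tmpfs (RAM) instead of node disk
  restartPolicy: Never
```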
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:08:52.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 31 13:08:53.030: INFO: Waiting up to 5m0s for pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005" in namespace "e2e-tests-containers-vmn55" to be "success or failure"
Dec 31 13:08:53.041: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.013174ms
Dec 31 13:08:55.093: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063818819s
Dec 31 13:08:57.113: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083027013s
Dec 31 13:08:59.645: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.615827488s
Dec 31 13:09:01.925: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894878814s
Dec 31 13:09:03.984: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.954319941s
Dec 31 13:09:06.126: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.096791973s
STEP: Saw pod success
Dec 31 13:09:06.127: INFO: Pod "client-containers-b18af17a-2bce-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:09:06.315: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-b18af17a-2bce-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 13:09:06.695: INFO: Waiting for pod client-containers-b18af17a-2bce-11ea-a129-0242ac110005 to disappear
Dec 31 13:09:06.741: INFO: Pod client-containers-b18af17a-2bce-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:09:06.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vmn55" for this suite.
Dec 31 13:09:13.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:09:13.224: INFO: namespace: e2e-tests-containers-vmn55, resource: bindings, ignored listing per whitelist
Dec 31 13:09:13.230: INFO: namespace e2e-tests-containers-vmn55 deletion completed in 6.465177018s

• [SLOW TEST:20.555 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:09:13.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xdgkg in namespace e2e-tests-proxy-7x8z7
I1231 13:09:13.521826       8 runners.go:184] Created replication controller with name: proxy-service-xdgkg, namespace: e2e-tests-proxy-7x8z7, replica count: 1
I1231 13:09:14.572612       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:15.573017       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:16.573570       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:17.574044       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:18.574429       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:19.574748       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:20.575388       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:21.576199       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:22.577152       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:09:23.577705       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1231 13:09:24.578214       8 runners.go:184] proxy-service-xdgkg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 31 13:09:24.699: INFO: setup took 11.297752663s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 31 13:09:24.729: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7x8z7/pods/proxy-service-xdgkg-h9qw7/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 31 13:09:49.771: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-cd5ed1ce-2bce-11ea-a129-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-bzzms", SelfLink:"/api/v1/namespaces/e2e-tests-pods-bzzms/pods/pod-submit-remove-cd5ed1ce-2bce-11ea-a129-0242ac110005", UID:"cd602146-2bce-11ea-a994-fa163e34d433", ResourceVersion:"16690502", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713394579, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"702228312", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d9nzx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0018c6880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d9nzx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b2a018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010e38c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b2a050)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001b2a070)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b2a078), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b2a07c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713394579, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713394588, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713394588, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713394579, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001592b60), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001592b80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://2954759db168e532f1f22a1b1694c89e1b5ce6fa991921d901d9157a768a9673"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:10:02.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bzzms" for this suite.
Dec 31 13:10:08.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:10:08.999: INFO: namespace: e2e-tests-pods-bzzms, resource: bindings, ignored listing per whitelist
Dec 31 13:10:09.075: INFO: namespace e2e-tests-pods-bzzms deletion completed in 6.325261859s

• [SLOW TEST:29.558 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:10:09.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 31 13:10:31.645: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:31.760: INFO: Pod pod-with-poststart-http-hook still exists
Dec 31 13:10:33.760: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:33.885: INFO: Pod pod-with-poststart-http-hook still exists
Dec 31 13:10:35.761: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:35.953: INFO: Pod pod-with-poststart-http-hook still exists
Dec 31 13:10:37.761: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:37.775: INFO: Pod pod-with-poststart-http-hook still exists
Dec 31 13:10:39.761: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:39.803: INFO: Pod pod-with-poststart-http-hook still exists
Dec 31 13:10:41.761: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:41.809: INFO: Pod pod-with-poststart-http-hook still exists
Dec 31 13:10:43.761: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 31 13:10:43.783: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:10:43.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jg7r8" for this suite.
Dec 31 13:11:07.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:11:08.033: INFO: namespace: e2e-tests-container-lifecycle-hook-jg7r8, resource: bindings, ignored listing per whitelist
Dec 31 13:11:08.133: INFO: namespace e2e-tests-container-lifecycle-hook-jg7r8 deletion completed in 24.325668756s

• [SLOW TEST:59.057 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:11:08.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0234e6c6-2bcf-11ea-a129-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 31 13:11:08.580: INFO: Waiting up to 5m0s for pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005" in namespace "e2e-tests-secrets-lgcc6" to be "success or failure"
Dec 31 13:11:08.726: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 146.536407ms
Dec 31 13:11:11.034: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454276962s
Dec 31 13:11:13.073: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493231153s
Dec 31 13:11:16.043: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.463043489s
Dec 31 13:11:18.058: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.478242794s
Dec 31 13:11:20.072: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.492573942s
STEP: Saw pod success
Dec 31 13:11:20.073: INFO: Pod "pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:11:20.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 31 13:11:20.504: INFO: Waiting for pod pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005 to disappear
Dec 31 13:11:20.524: INFO: Pod pod-secrets-02544d01-2bcf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:11:20.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lgcc6" for this suite.
Dec 31 13:11:28.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:11:28.806: INFO: namespace: e2e-tests-secrets-lgcc6, resource: bindings, ignored listing per whitelist
Dec 31 13:11:28.813: INFO: namespace e2e-tests-secrets-lgcc6 deletion completed in 8.266458543s
STEP: Destroying namespace "e2e-tests-secret-namespace-vw599" for this suite.
Dec 31 13:11:34.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:11:34.917: INFO: namespace: e2e-tests-secret-namespace-vw599, resource: bindings, ignored listing per whitelist
Dec 31 13:11:35.087: INFO: namespace e2e-tests-secret-namespace-vw599 deletion completed in 6.273537303s

• [SLOW TEST:26.953 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:11:35.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 31 13:11:35.326: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:11:59.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-69gcm" for this suite.
Dec 31 13:12:23.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:12:23.718: INFO: namespace: e2e-tests-init-container-69gcm, resource: bindings, ignored listing per whitelist
Dec 31 13:12:23.784: INFO: namespace e2e-tests-init-container-69gcm deletion completed in 24.292934963s

• [SLOW TEST:48.697 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:12:23.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 31 13:12:24.203: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690824,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:12:24.203: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690824,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 31 13:12:34.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690837,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 31 13:12:34.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690837,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 31 13:12:44.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690849,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 13:12:44.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690849,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 31 13:12:54.281: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690862,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 13:12:54.281: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-a,UID:2f6a88bc-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690862,Generation:0,CreationTimestamp:2019-12-31 13:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 31 13:13:04.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-b,UID:474f9a45-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690876,Generation:0,CreationTimestamp:2019-12-31 13:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:13:04.348: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-b,UID:474f9a45-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690876,Generation:0,CreationTimestamp:2019-12-31 13:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 31 13:13:14.367: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-b,UID:474f9a45-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690888,Generation:0,CreationTimestamp:2019-12-31 13:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:13:14.367: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lq69z,SelfLink:/api/v1/namespaces/e2e-tests-watch-lq69z/configmaps/e2e-watch-test-configmap-b,UID:474f9a45-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690888,Generation:0,CreationTimestamp:2019-12-31 13:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:13:24.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-lq69z" for this suite.
Dec 31 13:13:32.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:13:32.715: INFO: namespace: e2e-tests-watch-lq69z, resource: bindings, ignored listing per whitelist
Dec 31 13:13:32.854: INFO: namespace e2e-tests-watch-lq69z deletion completed in 8.377412857s

• [SLOW TEST:69.070 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
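The watcher test above filters ConfigMap events by label selector and observes the ADDED and DELETED notifications shown in the log. A minimal sketch of the labeled ConfigMap involved (name and label taken from the log; the empty `data` matches the dumped object):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  labels:
    watch-this-configmap: multiple-watchers-B   # the label the "B" watcher selects on
data: {}
```

A client watching with a matching label selector (for example `kubectl get configmaps --watch -l watch-this-configmap=multiple-watchers-B`) would receive exactly the event stream logged above.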
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:13:32.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 31 13:13:33.380: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 31 13:13:38.393: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 31 13:13:44.409: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 31 13:13:44.475: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-rd946,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rd946/deployments/test-cleanup-deployment,UID:5f3c082e-2bcf-11ea-a994-fa163e34d433,ResourceVersion:16690947,Generation:1,CreationTimestamp:2019-12-31 13:13:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 31 13:13:44.624: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:13:44.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rd946" for this suite.
Dec 31 13:13:54.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:13:55.158: INFO: namespace: e2e-tests-deployment-rd946, resource: bindings, ignored listing per whitelist
Dec 31 13:13:55.348: INFO: namespace e2e-tests-deployment-rd946 deletion completed in 10.662580896s

• [SLOW TEST:22.494 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
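The dumped Deployment spec above shows `RevisionHistoryLimit:*0`, which is what drives the "delete old replica sets" behavior: with a history limit of zero, superseded ReplicaSets are garbage-collected as soon as the rollout replaces them. A sketch of the equivalent manifest, reconstructed from the fields in the dump:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets; superseded ones are deleted
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```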
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:13:55.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dmgdx
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-dmgdx
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-dmgdx
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-dmgdx
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-dmgdx
Dec 31 13:14:11.268: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-dmgdx, name: ss-0, uid: 6ed24926-2bcf-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 31 13:14:11.652: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-dmgdx, name: ss-0, uid: 6ed24926-2bcf-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 31 13:14:11.886: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-dmgdx, name: ss-0, uid: 6ed24926-2bcf-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 31 13:14:12.066: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-dmgdx
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-dmgdx
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-dmgdx and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 31 13:14:25.523: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dmgdx
Dec 31 13:14:25.532: INFO: Scaling statefulset ss to 0
Dec 31 13:14:35.590: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 13:14:35.601: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:14:35.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dmgdx" for this suite.
Dec 31 13:14:43.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:14:43.799: INFO: namespace: e2e-tests-statefulset-dmgdx, resource: bindings, ignored listing per whitelist
Dec 31 13:14:43.941: INFO: namespace e2e-tests-statefulset-dmgdx deletion completed in 8.29728804s

• [SLOW TEST:48.592 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
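The StatefulSet test above forces an eviction by first scheduling a plain pod that claims a host port, then creating a StatefulSet whose pod requests the same port on the same node; the stateful pod fails, the controller deletes it, and once the conflicting pod is removed, ss-0 is recreated and runs. A sketch of the conflicting pod (the port number and image are assumptions; the log does not record them):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: hunter-server-hu5at5svl7ps   # pin to the same node as ss-0
  containers:
  - name: webserver
    image: nginx            # hypothetical; any image that holds the port works
    ports:
    - containerPort: 80
      hostPort: 21017       # assumed value; must match the stateful pod's hostPort
```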
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:14:43.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:14:52.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7jrmx" for this suite.
Dec 31 13:15:46.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:15:47.282: INFO: namespace: e2e-tests-kubelet-test-7jrmx, resource: bindings, ignored listing per whitelist
Dec 31 13:15:47.451: INFO: namespace e2e-tests-kubelet-test-7jrmx deletion completed in 54.743089719s

• [SLOW TEST:63.510 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
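The hostAliases test above verifies that entries declared in the pod spec are written into the container's /etc/hosts. A minimal sketch of such a pod (the IP and hostnames are illustrative assumptions, not values from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"         # assumed values for illustration
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]   # the kubelet-managed file should list foo.local and bar.local
```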
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:15:47.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 31 13:15:47.845: INFO: Waiting up to 5m0s for pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005" in namespace "e2e-tests-containers-wr4ct" to be "success or failure"
Dec 31 13:15:47.854: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.107182ms
Dec 31 13:15:50.054: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209073843s
Dec 31 13:15:52.079: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233735552s
Dec 31 13:15:54.530: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.684664093s
Dec 31 13:15:56.574: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728526156s
Dec 31 13:15:58.596: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.750323167s
STEP: Saw pod success
Dec 31 13:15:58.596: INFO: Pod "client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:15:58.601: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005 container test-container: 
STEP: delete the pod
Dec 31 13:15:58.758: INFO: Waiting for pod client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005 to disappear
Dec 31 13:15:58.775: INFO: Pod client-containers-a8b444f5-2bcf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:15:58.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wr4ct" for this suite.
Dec 31 13:16:04.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:16:04.881: INFO: namespace: e2e-tests-containers-wr4ct, resource: bindings, ignored listing per whitelist
Dec 31 13:16:04.961: INFO: namespace e2e-tests-containers-wr4ct deletion completed in 6.171603227s

• [SLOW TEST:17.510 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
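The "image defaults" test above runs a container with no `command` or `args`, so the image's own ENTRYPOINT and CMD take effect. A sketch of such a pod (the image name is a hypothetical stand-in; the log does not show which image the test used):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0   # hypothetical image
    # no command or args: the image's ENTRYPOINT/CMD run unmodified
```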
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 31 13:16:04.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-dvx6f/configmap-test-b324d94f-2bcf-11ea-a129-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 31 13:16:05.258: INFO: Waiting up to 5m0s for pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005" in namespace "e2e-tests-configmap-dvx6f" to be "success or failure"
Dec 31 13:16:05.280: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.605683ms
Dec 31 13:16:07.306: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047441877s
Dec 31 13:16:09.507: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248422826s
Dec 31 13:16:11.518: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260058362s
Dec 31 13:16:13.540: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.281094099s
Dec 31 13:16:15.558: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.299566136s
STEP: Saw pod success
Dec 31 13:16:15.558: INFO: Pod "pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005" satisfied condition "success or failure"
Dec 31 13:16:15.567: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005 container env-test: 
STEP: delete the pod
Dec 31 13:16:16.233: INFO: Waiting for pod pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005 to disappear
Dec 31 13:16:16.259: INFO: Pod pod-configmaps-b32baa63-2bcf-11ea-a129-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 31 13:16:16.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dvx6f" for this suite.
Dec 31 13:16:22.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:16:22.388: INFO: namespace: e2e-tests-configmap-dvx6f, resource: bindings, ignored listing per whitelist
Dec 31 13:16:22.500: INFO: namespace e2e-tests-configmap-dvx6f deletion completed in 6.234073669s

• [SLOW TEST:17.538 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
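The ConfigMap test above injects a ConfigMap key into a container's environment and checks the value from the pod's output. A sketch of the pattern (the key, value, and environment variable name are assumptions; the log only shows the generated object names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1           # assumed key/value for illustration
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints CONFIG_DATA_1=value-1 among the environment
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```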
SSSSSS
Dec 31 13:16:22.501: INFO: Running AfterSuite actions on all nodes
Dec 31 13:16:22.501: INFO: Running AfterSuite actions on node 1
Dec 31 13:16:22.501: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8947.825 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS