I1230 10:47:05.993122 8 e2e.go:224] Starting e2e run "b82ff88b-2af1-11ea-8970-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577702825 - Will randomize all specs
Will run 201 of 2164 specs

Dec 30 10:47:06.167: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 10:47:06.171: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 30 10:47:06.196: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 30 10:47:06.232: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 30 10:47:06.232: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 30 10:47:06.232: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 30 10:47:06.244: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 30 10:47:06.244: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 30 10:47:06.244: INFO: e2e test version: v1.13.12
Dec 30 10:47:06.245: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:47:06.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
Dec 30 10:47:06.429: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 30 10:47:21.762: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:47:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-8ssfc" for this suite.
Dec 30 10:47:46.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:47:46.317: INFO: namespace: e2e-tests-replicaset-8ssfc, resource: bindings, ignored listing per whitelist
Dec 30 10:47:46.353: INFO: namespace e2e-tests-replicaset-8ssfc deletion completed in 24.356409571s

• [SLOW TEST:40.108 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:47:46.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 30 10:47:47.196: INFO: created pod pod-service-account-defaultsa
Dec 30 10:47:47.196: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 30 10:47:47.348: INFO: created pod pod-service-account-mountsa
Dec 30 10:47:47.348: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 30 10:47:47.370: INFO: created pod pod-service-account-nomountsa
Dec 30 10:47:47.370: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 30 10:47:47.394: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 30 10:47:47.395: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 30 10:47:47.465: INFO: created pod pod-service-account-mountsa-mountspec
Dec 30 10:47:47.465: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 30 10:47:47.540: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 30 10:47:47.540: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 30 10:47:47.564: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 30 10:47:47.564: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 30 10:47:48.433: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 30 10:47:48.433: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 30 10:47:48.870: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 30 10:47:48.870: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:47:48.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-nb56t" for this suite.
Dec 30 10:48:16.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:48:16.769: INFO: namespace: e2e-tests-svcaccounts-nb56t, resource: bindings, ignored listing per whitelist
Dec 30 10:48:16.838: INFO: namespace e2e-tests-svcaccounts-nb56t deletion completed in 26.827855691s

• [SLOW TEST:30.485 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:48:16.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 30 10:48:17.173: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tdxww,SelfLink:/api/v1/namespaces/e2e-tests-watch-tdxww/configmaps/e2e-watch-test-label-changed,UID:e2e2aeb6-2af1-11ea-a994-fa163e34d433,ResourceVersion:16559281,Generation:0,CreationTimestamp:2019-12-30 10:48:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 10:48:17.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tdxww,SelfLink:/api/v1/namespaces/e2e-tests-watch-tdxww/configmaps/e2e-watch-test-label-changed,UID:e2e2aeb6-2af1-11ea-a994-fa163e34d433,ResourceVersion:16559282,Generation:0,CreationTimestamp:2019-12-30 10:48:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 30 10:48:17.174: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tdxww,SelfLink:/api/v1/namespaces/e2e-tests-watch-tdxww/configmaps/e2e-watch-test-label-changed,UID:e2e2aeb6-2af1-11ea-a994-fa163e34d433,ResourceVersion:16559284,Generation:0,CreationTimestamp:2019-12-30 10:48:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 30 10:48:27.373: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tdxww,SelfLink:/api/v1/namespaces/e2e-tests-watch-tdxww/configmaps/e2e-watch-test-label-changed,UID:e2e2aeb6-2af1-11ea-a994-fa163e34d433,ResourceVersion:16559298,Generation:0,CreationTimestamp:2019-12-30 10:48:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 10:48:27.373: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tdxww,SelfLink:/api/v1/namespaces/e2e-tests-watch-tdxww/configmaps/e2e-watch-test-label-changed,UID:e2e2aeb6-2af1-11ea-a994-fa163e34d433,ResourceVersion:16559299,Generation:0,CreationTimestamp:2019-12-30 10:48:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 30 10:48:27.373: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tdxww,SelfLink:/api/v1/namespaces/e2e-tests-watch-tdxww/configmaps/e2e-watch-test-label-changed,UID:e2e2aeb6-2af1-11ea-a994-fa163e34d433,ResourceVersion:16559300,Generation:0,CreationTimestamp:2019-12-30 10:48:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:48:27.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-tdxww" for this suite.
Dec 30 10:48:33.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:48:33.479: INFO: namespace: e2e-tests-watch-tdxww, resource: bindings, ignored listing per whitelist
Dec 30 10:48:33.610: INFO: namespace e2e-tests-watch-tdxww deletion completed in 6.22911268s

• [SLOW TEST:16.772 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:48:33.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ed17598b-2af1-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 10:48:34.169: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-wxk6j" to be "success or failure"
Dec 30 10:48:34.215: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.431775ms
Dec 30 10:48:36.352: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183064315s
Dec 30 10:48:38.380: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210456811s
Dec 30 10:48:41.076: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.907370901s
Dec 30 10:48:43.111: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.941957968s
Dec 30 10:48:45.129: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.960093271s
STEP: Saw pod success
Dec 30 10:48:45.129: INFO: Pod "pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 10:48:45.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 30 10:48:45.233: INFO: Waiting for pod pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005 to disappear
Dec 30 10:48:45.383: INFO: Pod pod-projected-configmaps-ed18e177-2af1-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:48:45.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wxk6j" for this suite.
Dec 30 10:48:51.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:48:51.546: INFO: namespace: e2e-tests-projected-wxk6j, resource: bindings, ignored listing per whitelist
Dec 30 10:48:51.585: INFO: namespace e2e-tests-projected-wxk6j deletion completed in 6.189291161s

• [SLOW TEST:17.974 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:48:51.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 10:48:51.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-bx25z" to be "success or failure"
Dec 30 10:48:51.846: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.747205ms
Dec 30 10:48:53.904: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10035867s
Dec 30 10:48:55.933: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129807073s
Dec 30 10:48:57.944: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140544366s
Dec 30 10:49:00.006: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202751086s
Dec 30 10:49:02.021: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217949482s
STEP: Saw pod success
Dec 30 10:49:02.021: INFO: Pod "downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 10:49:02.026: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005 container client-container:
STEP: delete the pod
Dec 30 10:49:02.318: INFO: Waiting for pod downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005 to disappear
Dec 30 10:49:02.327: INFO: Pod downwardapi-volume-f79953f7-2af1-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:49:02.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bx25z" for this suite.
Dec 30 10:49:08.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:49:08.713: INFO: namespace: e2e-tests-projected-bx25z, resource: bindings, ignored listing per whitelist
Dec 30 10:49:08.729: INFO: namespace e2e-tests-projected-bx25z deletion completed in 6.370667042s

• [SLOW TEST:17.144 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:49:08.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 10:49:08.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:11.271: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 10:49:11.271: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 30 10:49:11.505: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 30 10:49:11.730: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 30 10:49:11.871: INFO: scanned /root for discovery docs:
Dec 30 10:49:11.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:38.366: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 30 10:49:38.366: INFO: stdout: "Created e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d\nScaling up e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 30 10:49:38.366: INFO: stdout: "Created e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d\nScaling up e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 30 10:49:38.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:38.706: INFO: stderr: ""
Dec 30 10:49:38.706: INFO: stdout: "e2e-test-nginx-rc-4hzjl e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d-9p26h "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 30 10:49:43.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:43.929: INFO: stderr: ""
Dec 30 10:49:43.929: INFO: stdout: "e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d-9p26h "
Dec 30 10:49:43.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d-9p26h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:44.066: INFO: stderr: ""
Dec 30 10:49:44.066: INFO: stdout: "true"
Dec 30 10:49:44.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d-9p26h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:44.174: INFO: stderr: ""
Dec 30 10:49:44.174: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 30 10:49:44.174: INFO: e2e-test-nginx-rc-e7d56a063b284136bdb343f96054ec9d-9p26h is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 30 10:49:44.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-n9brx'
Dec 30 10:49:44.279: INFO: stderr: ""
Dec 30 10:49:44.279: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:49:44.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n9brx" for this suite.
Dec 30 10:50:08.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:50:08.456: INFO: namespace: e2e-tests-kubectl-n9brx, resource: bindings, ignored listing per whitelist
Dec 30 10:50:08.631: INFO: namespace e2e-tests-kubectl-n9brx deletion completed in 24.344535965s

• [SLOW TEST:59.902 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:50:08.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-259ebf80-2af2-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 10:50:09.109: INFO: Waiting up to 5m0s for pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-b6ffx" to be "success or failure"
Dec 30 10:50:09.390: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 281.72863ms
Dec 30 10:50:11.411: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302033385s
Dec 30 10:50:13.425: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316837705s
Dec 30 10:50:15.442: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.33364213s
Dec 30 10:50:17.472: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363557237s
Dec 30 10:50:19.486: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.377210054s
STEP: Saw pod success
Dec 30 10:50:19.486: INFO: Pod "pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 10:50:19.491: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 30 10:50:19.777: INFO: Waiting for pod pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005 to disappear
Dec 30 10:50:20.169: INFO: Pod pod-secrets-25aeeaf4-2af2-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:50:20.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b6ffx" for this suite.
Dec 30 10:50:26.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:50:26.600: INFO: namespace: e2e-tests-secrets-b6ffx, resource: bindings, ignored listing per whitelist
Dec 30 10:50:26.648: INFO: namespace e2e-tests-secrets-b6ffx deletion completed in 6.445900259s

• [SLOW TEST:18.017 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:50:26.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6n846.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6n846.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6n846.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6n846.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6n846.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6n846.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 10:50:41.027: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005)
Dec 30 10:50:41.034: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005)
Dec 30 10:50:41.042: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005)
Dec 30 10:50:41.049: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005)
Dec 30 10:50:41.056: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from
pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005) Dec 30 10:50:41.064: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005) Dec 30 10:50:41.074: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6n846.svc.cluster.local from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005) Dec 30 10:50:41.088: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005) Dec 30 10:50:41.100: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005) Dec 30 10:50:41.108: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-3046650b-2af2-11ea-8970-0242ac110005) Dec 30 10:50:41.177: INFO: Lookups using e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6n846.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord] Dec 
30 10:50:46.288: INFO: DNS probes using e2e-tests-dns-6n846/dns-test-3046650b-2af2-11ea-8970-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:50:46.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-6n846" for this suite. Dec 30 10:50:52.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:50:52.630: INFO: namespace: e2e-tests-dns-6n846, resource: bindings, ignored listing per whitelist Dec 30 10:50:52.680: INFO: namespace e2e-tests-dns-6n846 deletion completed in 6.22264662s • [SLOW TEST:26.032 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:50:52.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9tvvs STEP: creating a selector STEP: Creating the 
service pods in kubernetes Dec 30 10:50:52.935: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 30 10:51:25.298: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9tvvs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 10:51:25.298: INFO: >>> kubeConfig: /root/.kube/config Dec 30 10:51:25.793: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:51:25.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9tvvs" for this suite. Dec 30 10:51:49.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:51:49.995: INFO: namespace: e2e-tests-pod-network-test-9tvvs, resource: bindings, ignored listing per whitelist Dec 30 10:51:50.136: INFO: namespace e2e-tests-pod-network-test-9tvvs deletion completed in 24.310370668s • [SLOW TEST:57.456 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:51:50.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-61ff9550-2af2-11ea-8970-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 30 10:51:50.342: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-9nlsr" to be "success or failure" Dec 30 10:51:50.354: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.746152ms Dec 30 10:51:52.374: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032246921s Dec 30 10:51:54.395: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052988422s Dec 30 10:51:56.427: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08509354s Dec 30 10:51:58.454: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112457318s Dec 30 10:52:00.478: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.135587389s STEP: Saw pod success Dec 30 10:52:00.478: INFO: Pod "pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005" satisfied condition "success or failure" Dec 30 10:52:00.484: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 30 10:52:00.591: INFO: Waiting for pod pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005 to disappear Dec 30 10:52:00.652: INFO: Pod pod-projected-configmaps-6200616a-2af2-11ea-8970-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:52:00.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9nlsr" for this suite. Dec 30 10:52:06.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:52:06.859: INFO: namespace: e2e-tests-projected-9nlsr, resource: bindings, ignored listing per whitelist Dec 30 10:52:06.889: INFO: namespace e2e-tests-projected-9nlsr deletion completed in 6.213884482s • [SLOW TEST:16.753 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:52:06.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 30 10:52:29.280: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:29.316: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:31.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:31.334: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:33.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:33.340: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:35.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:35.340: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:37.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:37.331: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:39.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:39.335: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:41.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:41.342: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:43.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:43.329: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:45.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:45.721: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:47.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:47.332: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:49.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:49.325: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:51.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:51.326: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 10:52:53.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 10:52:53.346: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:52:53.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zjtcc" for this suite.
Dec 30 10:53:17.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:53:17.498: INFO: namespace: e2e-tests-container-lifecycle-hook-zjtcc, resource: bindings, ignored listing per whitelist Dec 30 10:53:17.567: INFO: namespace e2e-tests-container-lifecycle-hook-zjtcc deletion completed in 24.164430026s • [SLOW TEST:70.678 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:53:17.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-b8qr STEP: Creating a pod to test atomic-volume-subpath Dec 30 10:53:17.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-b8qr" in namespace 
"e2e-tests-subpath-8896h" to be "success or failure" Dec 30 10:53:17.829: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 30.945299ms Dec 30 10:53:19.876: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078434104s Dec 30 10:53:21.890: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091748113s Dec 30 10:53:23.898: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100516161s Dec 30 10:53:25.946: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1478682s Dec 30 10:53:27.969: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170865728s Dec 30 10:53:29.980: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182106555s Dec 30 10:53:32.004: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.206306211s Dec 30 10:53:34.029: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 16.231641136s Dec 30 10:53:36.060: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 18.26238003s Dec 30 10:53:38.083: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 20.285577612s Dec 30 10:53:40.107: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 22.308820637s Dec 30 10:53:42.123: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 24.325436538s Dec 30 10:53:44.141: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.342790621s Dec 30 10:53:46.163: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 28.365579715s Dec 30 10:53:48.178: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 30.380421395s Dec 30 10:53:50.288: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Running", Reason="", readiness=false. Elapsed: 32.490340475s Dec 30 10:53:52.301: INFO: Pod "pod-subpath-test-projected-b8qr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.503364598s STEP: Saw pod success Dec 30 10:53:52.301: INFO: Pod "pod-subpath-test-projected-b8qr" satisfied condition "success or failure" Dec 30 10:53:52.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-b8qr container test-container-subpath-projected-b8qr: STEP: delete the pod Dec 30 10:53:53.245: INFO: Waiting for pod pod-subpath-test-projected-b8qr to disappear Dec 30 10:53:53.464: INFO: Pod pod-subpath-test-projected-b8qr no longer exists STEP: Deleting pod pod-subpath-test-projected-b8qr Dec 30 10:53:53.464: INFO: Deleting pod "pod-subpath-test-projected-b8qr" in namespace "e2e-tests-subpath-8896h" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:53:53.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-8896h" for this suite. 
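The DNS probe commands earlier in the log derive each pod's A record from its IP with an awk pipeline: `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<namespace>.pod.cluster.local"}'`. The same transformation in Go, as an illustrative sketch (the example IP is the one the networking test curls above; `podARecord` is a hypothetical helper, not a framework function):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord builds a pod's DNS A-record name: the IPv4 address with dots
// replaced by dashes, qualified by the namespace and the pod zone. This is
// exactly what the awk expression in the wheezy/jessie probe commands does.
func podARecord(podIP, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)
}

func main() {
	fmt.Println(podARecord("10.32.0.4", "e2e-tests-dns-6n846"))
	// → 10-32-0-4.e2e-tests-dns-6n846.pod.cluster.local
}
```

The probers then resolve that name over both UDP and TCP (`dig +notcp` / `dig +tcp`) and write an `OK` marker file per successful lookup, which is what the `wheezy_udp@PodARecord` result names refer to.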
Dec 30 10:54:01.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:54:01.654: INFO: namespace: e2e-tests-subpath-8896h, resource: bindings, ignored listing per whitelist Dec 30 10:54:01.860: INFO: namespace e2e-tests-subpath-8896h deletion completed in 8.328981287s • [SLOW TEST:44.293 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:54:01.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Dec 30 10:54:02.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 30 10:54:02.277: INFO: stderr: "" Dec 30 10:54:02.277: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at 
\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:54:02.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zctgq" for this suite. Dec 30 10:54:08.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:54:08.375: INFO: namespace: e2e-tests-kubectl-zctgq, resource: bindings, ignored listing per whitelist Dec 30 10:54:08.460: INFO: namespace e2e-tests-kubectl-zctgq deletion completed in 6.173097086s • [SLOW TEST:6.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:54:08.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace 
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1230 10:54:49.702150 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 10:54:49.702: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:54:49.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-572xc" for this suite.
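The `kubectl cluster-info` output captured a few tests earlier arrives with ANSI color sequences embedded in stdout (the `\x1b[0;32m` / `\x1b[0m` runs). Before matching such output textually, the escapes have to be stripped. A sketch of that, using Go's regexp package (the helper name is illustrative, not a framework API):

```go
package main

import (
	"fmt"
	"regexp"
)

// ansi matches SGR color sequences (ESC [ params m), the kind kubectl
// cluster-info embeds in its stdout, visible in the log as \x1b[0;32m etc.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// stripANSI removes color codes so the output can be compared as plain
// text, e.g. when checking that "Kubernetes master" appears in cluster-info.
func stripANSI(s string) string {
	return ansi.ReplaceAllString(s, "")
}

func main() {
	out := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m"
	fmt.Println(stripANSI(out))
	// → Kubernetes master is running at https://172.24.4.212:6443
}
```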
Dec 30 10:55:13.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:55:13.907: INFO: namespace: e2e-tests-gc-572xc, resource: bindings, ignored listing per whitelist Dec 30 10:55:14.024: INFO: namespace e2e-tests-gc-572xc deletion completed in 24.316009011s • [SLOW TEST:65.564 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:55:14.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 30 10:55:14.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-z5bbp" to be "success or failure" Dec 30 10:55:14.260: INFO: Pod 
"downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.567146ms Dec 30 10:55:17.041: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.786966096s Dec 30 10:55:19.052: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.797811383s Dec 30 10:55:21.081: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.827027361s Dec 30 10:55:23.134: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879552953s Dec 30 10:55:25.151: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.896542697s Dec 30 10:55:27.168: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.913813445s STEP: Saw pod success Dec 30 10:55:27.168: INFO: Pod "downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005" satisfied condition "success or failure" Dec 30 10:55:27.173: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005 container client-container: STEP: delete the pod Dec 30 10:55:27.325: INFO: Waiting for pod downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005 to disappear Dec 30 10:55:27.344: INFO: Pod downwardapi-volume-db9073ba-2af2-11ea-8970-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:55:27.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-z5bbp" for this suite. 
Dec 30 10:55:33.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:55:33.487: INFO: namespace: e2e-tests-downward-api-z5bbp, resource: bindings, ignored listing per whitelist
Dec 30 10:55:33.676: INFO: namespace e2e-tests-downward-api-z5bbp deletion completed in 6.313154913s
• [SLOW TEST:19.652 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:55:33.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 10:55:34.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-r9w2r" to be "success or failure"
Dec 30 10:55:34.095: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.79521ms
Dec 30 10:55:36.121: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052764353s
Dec 30 10:55:38.138: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069608733s
Dec 30 10:55:40.152: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08426662s
Dec 30 10:55:42.186: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117980056s
Dec 30 10:55:44.218: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150091038s
STEP: Saw pod success
Dec 30 10:55:44.218: INFO: Pod "downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 10:55:44.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005 container client-container:
STEP: delete the pod
Dec 30 10:55:44.689: INFO: Waiting for pod downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005 to disappear
Dec 30 10:55:44.711: INFO: Pod downwardapi-volume-e75d9f88-2af2-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:55:44.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r9w2r" for this suite.
Dec 30 10:55:50.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:55:50.992: INFO: namespace: e2e-tests-projected-r9w2r, resource: bindings, ignored listing per whitelist
Dec 30 10:55:51.144: INFO: namespace e2e-tests-projected-r9w2r deletion completed in 6.426471873s
• [SLOW TEST:17.468 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:55:51.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 30 10:55:58.637: INFO: 10 pods remaining
Dec 30 10:55:58.637: INFO: 10 pods has nil DeletionTimestamp
Dec 30 10:55:58.637: INFO:
Dec 30 10:56:01.007: INFO: 9 pods remaining
Dec 30 10:56:01.007: INFO: 8 pods has nil DeletionTimestamp
Dec 30 10:56:01.007: INFO:
STEP: Gathering metrics
W1230 10:56:01.986607 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 10:56:01.986: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:56:01.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vhlgg" for this suite.
Dec 30 10:56:16.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:56:16.313: INFO: namespace: e2e-tests-gc-vhlgg, resource: bindings, ignored listing per whitelist
Dec 30 10:56:16.320: INFO: namespace e2e-tests-gc-vhlgg deletion completed in 14.307227824s
• [SLOW TEST:25.175 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:56:16.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 30 10:56:16.775: INFO: Waiting up to 5m0s for pod "pod-00d21eba-2af3-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-lwzxv" to be "success or failure"
Dec 30 10:56:16.788: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.906662ms
Dec 30 10:56:18.802: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027528915s
Dec 30 10:56:20.854: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079574135s
Dec 30 10:56:22.871: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096405603s
Dec 30 10:56:24.882: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107192462s
Dec 30 10:56:26.900: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125484186s
STEP: Saw pod success
Dec 30 10:56:26.900: INFO: Pod "pod-00d21eba-2af3-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 10:56:26.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-00d21eba-2af3-11ea-8970-0242ac110005 container test-container:
STEP: delete the pod
Dec 30 10:56:28.408: INFO: Waiting for pod pod-00d21eba-2af3-11ea-8970-0242ac110005 to disappear
Dec 30 10:56:28.425: INFO: Pod pod-00d21eba-2af3-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:56:28.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lwzxv" for this suite.
Dec 30 10:56:34.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:56:34.589: INFO: namespace: e2e-tests-emptydir-lwzxv, resource: bindings, ignored listing per whitelist
Dec 30 10:56:34.633: INFO: namespace e2e-tests-emptydir-lwzxv deletion completed in 6.199220909s
• [SLOW TEST:18.313 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:56:34.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 30 10:56:34.970: INFO: Waiting up to 5m0s for pod "pod-0bac7561-2af3-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-pp6k7" to be "success or failure"
Dec 30 10:56:35.050: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.420386ms
Dec 30 10:56:37.086: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11575376s
Dec 30 10:56:39.181: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210850608s
Dec 30 10:56:41.418: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447513363s
Dec 30 10:56:43.432: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.461607481s
Dec 30 10:56:45.454: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.484417257s
STEP: Saw pod success
Dec 30 10:56:45.455: INFO: Pod "pod-0bac7561-2af3-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 10:56:45.460: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0bac7561-2af3-11ea-8970-0242ac110005 container test-container:
STEP: delete the pod
Dec 30 10:56:46.594: INFO: Waiting for pod pod-0bac7561-2af3-11ea-8970-0242ac110005 to disappear
Dec 30 10:56:46.635: INFO: Pod pod-0bac7561-2af3-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:56:46.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pp6k7" for this suite.
Dec 30 10:56:52.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:56:53.053: INFO: namespace: e2e-tests-emptydir-pp6k7, resource: bindings, ignored listing per whitelist
Dec 30 10:56:53.105: INFO: namespace e2e-tests-emptydir-pp6k7 deletion completed in 6.329611448s
• [SLOW TEST:18.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:56:53.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-2rtd
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 10:56:53.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2rtd" in namespace "e2e-tests-subpath-btrjw" to be "success or failure"
Dec 30 10:56:53.301: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.495305ms
Dec 30 10:56:55.316: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03732727s
Dec 30 10:56:57.333: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054622382s
Dec 30 10:56:59.344: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066064687s
Dec 30 10:57:01.357: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078514415s
Dec 30 10:57:03.405: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127237048s
Dec 30 10:57:05.421: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.14242021s
Dec 30 10:57:07.460: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=true. Elapsed: 14.182147154s
Dec 30 10:57:09.476: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 16.197583398s
Dec 30 10:57:11.494: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 18.216105645s
Dec 30 10:57:13.507: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 20.228883626s
Dec 30 10:57:15.521: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 22.242631306s
Dec 30 10:57:17.537: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 24.258864972s
Dec 30 10:57:19.554: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 26.275293689s
Dec 30 10:57:21.574: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 28.296057189s
Dec 30 10:57:23.678: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Running", Reason="", readiness=false. Elapsed: 30.399732453s
Dec 30 10:57:25.699: INFO: Pod "pod-subpath-test-downwardapi-2rtd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.421165967s
STEP: Saw pod success
Dec 30 10:57:25.699: INFO: Pod "pod-subpath-test-downwardapi-2rtd" satisfied condition "success or failure"
Dec 30 10:57:25.705: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-2rtd container test-container-subpath-downwardapi-2rtd:
STEP: delete the pod
Dec 30 10:57:25.802: INFO: Waiting for pod pod-subpath-test-downwardapi-2rtd to disappear
Dec 30 10:57:25.808: INFO: Pod pod-subpath-test-downwardapi-2rtd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2rtd
Dec 30 10:57:25.808: INFO: Deleting pod "pod-subpath-test-downwardapi-2rtd" in namespace "e2e-tests-subpath-btrjw"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 10:57:25.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-btrjw" for this suite.
Dec 30 10:57:31.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 10:57:31.988: INFO: namespace: e2e-tests-subpath-btrjw, resource: bindings, ignored listing per whitelist
Dec 30 10:57:32.059: INFO: namespace e2e-tests-subpath-btrjw deletion completed in 6.240722346s
• [SLOW TEST:38.954 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 10:57:32.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 10:57:32.376: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 30 10:57:37.762: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 30 10:57:41.791: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 30 10:57:43.833: INFO: Creating deployment "test-rollover-deployment"
Dec 30 10:57:43.900: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 30 10:57:46.015: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 30 10:57:46.032: INFO: Ensure that both replica sets have 1 created replica
Dec 30 10:57:46.045: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 30 10:57:46.064: INFO: Updating deployment test-rollover-deployment
Dec 30 10:57:46.064: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 30 10:57:48.323: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 30 10:57:48.605: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 30 10:57:48.630: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:57:48.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300267, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:57:50.646: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:57:50.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300267, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:57:52.700: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:57:52.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300267, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:57:54.883: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:57:54.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300267, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:57:56.707: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:57:56.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300267, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:57:58.667: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:57:58.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300277, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:58:00.684: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:58:00.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300277, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:58:02.651: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:58:02.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300277, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:58:04.667: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:58:04.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300277, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:58:06.641: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 10:58:06.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300277, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713300264, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 10:58:09.364: INFO:
Dec 30 10:58:09.364: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 30 10:58:09.597: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-zlznw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zlznw/deployments/test-rollover-deployment,UID:34bcac8d-2af3-11ea-a994-fa163e34d433,ResourceVersion:16560839,Generation:2,CreationTimestamp:2019-12-30 10:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-30 10:57:44 +0000 UTC 2019-12-30 10:57:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-30 10:58:08 +0000 UTC 2019-12-30 10:57:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Dec 30 10:58:09.628: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-zlznw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zlznw/replicasets/test-rollover-deployment-5b8479fdb6,UID:3616013e-2af3-11ea-a994-fa163e34d433,ResourceVersion:16560830,Generation:2,CreationTimestamp:2019-12-30 10:57:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34bcac8d-2af3-11ea-a994-fa163e34d433 0xc001feb5d7 0xc001feb5d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 30 10:58:09.628: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 30 10:58:09.629: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-zlznw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zlznw/replicasets/test-rollover-controller,UID:2de11778-2af3-11ea-a994-fa163e34d433,ResourceVersion:16560838,Generation:2,CreationTimestamp:2019-12-30 10:57:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34bcac8d-2af3-11ea-a994-fa163e34d433 0xc001feb337 0xc001feb338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 30 10:58:09.629: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-zlznw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zlznw/replicasets/test-rollover-deployment-58494b7559,UID:34d6f6b8-2af3-11ea-a994-fa163e34d433,ResourceVersion:16560796,Generation:2,CreationTimestamp:2019-12-30 10:57:44 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34bcac8d-2af3-11ea-a994-fa163e34d433 0xc001feb487 0xc001feb488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 30 10:58:09.647: INFO: Pod "test-rollover-deployment-5b8479fdb6-2wfcb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-2wfcb,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-zlznw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zlznw/pods/test-rollover-deployment-5b8479fdb6-2wfcb,UID:369400bf-2af3-11ea-a994-fa163e34d433,ResourceVersion:16560815,Generation:0,CreationTimestamp:2019-12-30 10:57:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 3616013e-2af3-11ea-a994-fa163e34d433 0xc001fc6857 0xc001fc6858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lv8bw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lv8bw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lv8bw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc68c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc68f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 10:57:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 10:57:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 10:57:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 10:57:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-30 10:57:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-30 10:57:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d09a2d2ba9c6e2639b073cb2ae2beb586210c6612a1e2479bbae902d84cc3184}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:58:09.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zlznw" for this suite. Dec 30 10:58:17.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:58:17.917: INFO: namespace: e2e-tests-deployment-zlznw, resource: bindings, ignored listing per whitelist Dec 30 10:58:17.954: INFO: namespace e2e-tests-deployment-zlznw deletion completed in 8.298917807s • [SLOW TEST:45.895 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:58:17.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token 
STEP: Creating a pod to test consume service account token Dec 30 10:58:18.775: INFO: Waiting up to 5m0s for pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2" in namespace "e2e-tests-svcaccounts-wlqcz" to be "success or failure" Dec 30 10:58:18.856: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 81.202706ms Dec 30 10:58:20.886: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110986297s Dec 30 10:58:22.899: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124255648s Dec 30 10:58:24.913: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138146276s Dec 30 10:58:26.926: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150530631s Dec 30 10:58:28.940: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165081653s Dec 30 10:58:31.486: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.711328608s Dec 30 10:58:33.526: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.751495178s Dec 30 10:58:35.544: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.769388347s STEP: Saw pod success Dec 30 10:58:35.544: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2" satisfied condition "success or failure" Dec 30 10:58:35.554: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2 container token-test: STEP: delete the pod Dec 30 10:58:35.647: INFO: Waiting for pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2 to disappear Dec 30 10:58:35.652: INFO: Pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-c6wr2 no longer exists STEP: Creating a pod to test consume service account root CA Dec 30 10:58:35.724: INFO: Waiting up to 5m0s for pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd" in namespace "e2e-tests-svcaccounts-wlqcz" to be "success or failure" Dec 30 10:58:35.907: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. Elapsed: 183.064623ms Dec 30 10:58:38.046: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322080306s Dec 30 10:58:40.077: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353312603s Dec 30 10:58:42.502: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778527178s Dec 30 10:58:44.671: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.947411915s Dec 30 10:58:47.483: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.759543293s Dec 30 10:58:49.494: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.770684314s Dec 30 10:58:51.512: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.78822853s STEP: Saw pod success Dec 30 10:58:51.512: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd" satisfied condition "success or failure" Dec 30 10:58:51.521: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd container root-ca-test: STEP: delete the pod Dec 30 10:58:51.785: INFO: Waiting for pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd to disappear Dec 30 10:58:51.811: INFO: Pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-sjljd no longer exists STEP: Creating a pod to test consume service account namespace Dec 30 10:58:51.943: INFO: Waiting up to 5m0s for pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg" in namespace "e2e-tests-svcaccounts-wlqcz" to be "success or failure" Dec 30 10:58:51.980: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 37.660931ms Dec 30 10:58:53.992: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04959506s Dec 30 10:58:56.014: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071390594s Dec 30 10:58:58.201: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258137062s Dec 30 10:59:00.213: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270103784s Dec 30 10:59:02.246: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.303181606s Dec 30 10:59:04.260: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.31753025s Dec 30 10:59:06.286: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.342770534s Dec 30 10:59:08.304: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.361473502s STEP: Saw pod success Dec 30 10:59:08.304: INFO: Pod "pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg" satisfied condition "success or failure" Dec 30 10:59:08.311: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg container namespace-test: STEP: delete the pod Dec 30 10:59:08.629: INFO: Waiting for pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg to disappear Dec 30 10:59:08.644: INFO: Pod pod-service-account-4981ac4a-2af3-11ea-8970-0242ac110005-xwvbg no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:59:08.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-wlqcz" for this suite. 
Dec 30 10:59:16.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 10:59:16.726: INFO: namespace: e2e-tests-svcaccounts-wlqcz, resource: bindings, ignored listing per whitelist Dec 30 10:59:16.882: INFO: namespace e2e-tests-svcaccounts-wlqcz deletion completed in 8.223565451s • [SLOW TEST:58.927 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 10:59:16.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-hq68f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hq68f to expose endpoints map[] Dec 30 10:59:17.565: INFO: Get endpoints failed (192.257914ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Dec 30 10:59:18.578: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hq68f exposes 
endpoints map[] (1.20594435s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-hq68f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hq68f to expose endpoints map[pod1:[100]] Dec 30 10:59:23.456: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.846584494s elapsed, will retry) Dec 30 10:59:26.617: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hq68f exposes endpoints map[pod1:[100]] (8.007289623s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-hq68f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hq68f to expose endpoints map[pod1:[100] pod2:[101]] Dec 30 10:59:30.967: INFO: Unexpected endpoints: found map[6d366961-2af3-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.333480375s elapsed, will retry) Dec 30 10:59:35.076: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hq68f exposes endpoints map[pod1:[100] pod2:[101]] (8.442512016s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-hq68f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hq68f to expose endpoints map[pod2:[101]] Dec 30 10:59:36.694: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hq68f exposes endpoints map[pod2:[101]] (1.596022033s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-hq68f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hq68f to expose endpoints map[] Dec 30 10:59:38.048: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hq68f exposes endpoints map[] (1.336709096s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 30 10:59:38.288: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-hq68f" for this suite. Dec 30 11:00:02.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 11:00:02.566: INFO: namespace: e2e-tests-services-hq68f, resource: bindings, ignored listing per whitelist Dec 30 11:00:02.586: INFO: namespace e2e-tests-services-hq68f deletion completed in 24.276376376s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:45.704 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 30 11:00:02.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 30 11:00:02.786: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 14.132905ms)
Dec 30 11:00:02.794: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.866901ms)
Dec 30 11:00:02.894: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 99.702531ms)
Dec 30 11:00:02.912: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.849233ms)
Dec 30 11:00:02.928: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.629355ms)
Dec 30 11:00:02.944: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.000234ms)
Dec 30 11:00:02.950: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.556542ms)
Dec 30 11:00:02.955: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.293081ms)
Dec 30 11:00:02.960: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.568768ms)
Dec 30 11:00:02.964: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.367586ms)
Dec 30 11:00:02.970: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.186259ms)
Dec 30 11:00:02.975: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.143762ms)
Dec 30 11:00:03.168: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 192.93023ms)
Dec 30 11:00:03.184: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.427586ms)
Dec 30 11:00:03.196: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.537356ms)
Dec 30 11:00:03.205: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.823355ms)
Dec 30 11:00:03.211: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.847982ms)
Dec 30 11:00:03.217: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.343831ms)
Dec 30 11:00:03.223: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.87047ms)
Dec 30 11:00:03.228: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.242195ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:00:03.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-5gx9c" for this suite.
Dec 30 11:00:09.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:00:09.389: INFO: namespace: e2e-tests-proxy-5gx9c, resource: bindings, ignored listing per whitelist
Dec 30 11:00:09.458: INFO: namespace e2e-tests-proxy-5gx9c deletion completed in 6.224311485s

• [SLOW TEST:6.872 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
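The numbered (12)–(19) entries above each fetch the node's log directory through the apiserver's node proxy subresource. A minimal sketch of how that URL is assembled, using the node name from the log (the helper function and apiserver address are illustrative, not part of the e2e framework):

```python
def node_proxy_logs_url(api_server: str, node_name: str, path: str = "") -> str:
    """Build the apiserver proxy URL used to read a node's log directory.

    Mirrors the URL shape seen in the log above; the test requests this
    endpoint repeatedly and expects HTTP 200 each time.
    """
    return f"{api_server}/api/v1/nodes/{node_name}/proxy/logs/{path}"

# Apiserver address is a placeholder; the node name is taken from the log.
url = node_proxy_logs_url("https://127.0.0.1:6443", "hunter-server-hu5at5svl7ps")
```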
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:00:09.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1230 11:00:25.681670       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 11:00:25.681: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:00:25.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mg9bk" for this suite.
Dec 30 11:00:42.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:00:42.972: INFO: namespace: e2e-tests-gc-mg9bk, resource: bindings, ignored listing per whitelist
Dec 30 11:00:43.087: INFO: namespace e2e-tests-gc-mg9bk deletion completed in 17.391495981s

• [SLOW TEST:33.629 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
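The garbage-collector behaviour exercised above can be modelled simply: a dependent with two owner references survives deletion of one owner as long as the other owner still exists. A toy simulation of that rule (a simplification of the real dependency graph, not controller code):

```python
def surviving_dependents(dependents: dict, live_owners: set) -> dict:
    """Keep every dependent that still has at least one live owner."""
    return {
        name: owners
        for name, owners in dependents.items()
        if any(o in live_owners for o in owners)
    }

# pod-b was given both RCs as owners, matching the "set half of pods" step.
pods = {
    "pod-a": {"simpletest-rc-to-be-deleted"},
    "pod-b": {"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"},
}
# Deleting simpletest-rc-to-be-deleted leaves only pods with a live owner.
remaining = surviving_dependents(pods, {"simpletest-rc-to-stay"})
```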
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:00:43.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 30 11:00:46.474: INFO: Waiting up to 5m0s for pod "pod-a16083be-2af3-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-lgtk6" to be "success or failure"
Dec 30 11:00:46.541: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.02449ms
Dec 30 11:00:48.699: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22555941s
Dec 30 11:00:50.718: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244345946s
Dec 30 11:00:52.730: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256062165s
Dec 30 11:00:54.757: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283199081s
Dec 30 11:00:56.766: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.292074329s
Dec 30 11:00:58.780: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.306237964s
STEP: Saw pod success
Dec 30 11:00:58.780: INFO: Pod "pod-a16083be-2af3-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:00:58.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a16083be-2af3-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:00:58.872: INFO: Waiting for pod pod-a16083be-2af3-11ea-8970-0242ac110005 to disappear
Dec 30 11:00:58.966: INFO: Pod pod-a16083be-2af3-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:00:58.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lgtk6" for this suite.
Dec 30 11:01:05.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:01:05.183: INFO: namespace: e2e-tests-emptydir-lgtk6, resource: bindings, ignored listing per whitelist
Dec 30 11:01:05.254: INFO: namespace e2e-tests-emptydir-lgtk6 deletion completed in 6.280391243s

• [SLOW TEST:22.167 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
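The pod this test creates mounts an emptyDir volume and checks the mount's file mode. A sketch of its shape as a manifest-style dict — the image, command, and names are assumptions based on the test's purpose, not taken from the log:

```python
# Omitting "medium" in the emptyDir spec selects the default (disk-backed)
# medium; {"medium": "Memory"} would use tmpfs instead.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-mode"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # Print the mount's permission bits, then exit.
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
    },
}
```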
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:01:05.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 30 11:01:05.618: INFO: Number of nodes with available pods: 0
Dec 30 11:01:05.618: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:06.636: INFO: Number of nodes with available pods: 0
Dec 30 11:01:06.636: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:07.644: INFO: Number of nodes with available pods: 0
Dec 30 11:01:07.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:08.654: INFO: Number of nodes with available pods: 0
Dec 30 11:01:08.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:09.638: INFO: Number of nodes with available pods: 0
Dec 30 11:01:09.638: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:10.847: INFO: Number of nodes with available pods: 0
Dec 30 11:01:10.847: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:11.653: INFO: Number of nodes with available pods: 0
Dec 30 11:01:11.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:12.678: INFO: Number of nodes with available pods: 0
Dec 30 11:01:12.678: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:13.654: INFO: Number of nodes with available pods: 0
Dec 30 11:01:13.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:01:14.659: INFO: Number of nodes with available pods: 1
Dec 30 11:01:14.659: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 30 11:01:14.769: INFO: Number of nodes with available pods: 1
Dec 30 11:01:14.769: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-t4jzq, will wait for the garbage collector to delete the pods
Dec 30 11:01:15.026: INFO: Deleting DaemonSet.extensions daemon-set took: 119.413603ms
Dec 30 11:01:15.227: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.504224ms
Dec 30 11:01:21.135: INFO: Number of nodes with available pods: 0
Dec 30 11:01:21.135: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 11:01:21.141: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t4jzq/daemonsets","resourceVersion":"16561422"},"items":null}

Dec 30 11:01:21.146: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t4jzq/pods","resourceVersion":"16561422"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:01:21.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-t4jzq" for this suite.
Dec 30 11:01:27.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:01:27.484: INFO: namespace: e2e-tests-daemonsets-t4jzq, resource: bindings, ignored listing per whitelist
Dec 30 11:01:27.589: INFO: namespace e2e-tests-daemonsets-t4jzq deletion completed in 6.221760098s

• [SLOW TEST:22.335 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
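The retry behaviour checked above — a daemon pod forced to 'Failed' is deleted and replaced — can be sketched as a toy reconcile loop (a simplification for illustration, not the actual DaemonSet controller logic):

```python
def reconcile(pods_by_node: dict) -> dict:
    """Replace failed daemon pods so every node ends with one Running pod."""
    return {node: "Running" for node in pods_by_node}

# The single node from the log, with its daemon pod marked Failed.
state = {"hunter-server-hu5at5svl7ps": "Failed"}
state = reconcile(state)
```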
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:01:27.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 30 11:01:27.790: INFO: Waiting up to 5m0s for pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-pbm4r" to be "success or failure"
Dec 30 11:01:27.831: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.354915ms
Dec 30 11:01:29.969: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179572962s
Dec 30 11:01:31.991: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201332652s
Dec 30 11:01:34.362: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572011656s
Dec 30 11:01:36.375: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585132601s
Dec 30 11:01:38.386: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.59640439s
STEP: Saw pod success
Dec 30 11:01:38.386: INFO: Pod "downward-api-ba31be2d-2af3-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:01:38.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ba31be2d-2af3-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 11:01:38.895: INFO: Waiting for pod downward-api-ba31be2d-2af3-11ea-8970-0242ac110005 to disappear
Dec 30 11:01:38.922: INFO: Pod downward-api-ba31be2d-2af3-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:01:38.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pbm4r" for this suite.
Dec 30 11:01:44.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:01:45.002: INFO: namespace: e2e-tests-downward-api-pbm4r, resource: bindings, ignored listing per whitelist
Dec 30 11:01:45.124: INFO: namespace e2e-tests-downward-api-pbm4r deletion completed in 6.191873576s

• [SLOW TEST:17.533 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
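The env var this test injects exposes the pod's own UID through the downward API `fieldRef`. A minimal sketch of that container env entry (the variable name is illustrative):

```python
env = [{
    "name": "POD_UID",
    # metadata.uid is resolved by the kubelet from the pod's own metadata.
    "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
}]
```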
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:01:45.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 30 11:01:45.406: INFO: Waiting up to 5m0s for pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-z2lrw" to be "success or failure"
Dec 30 11:01:45.428: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.939314ms
Dec 30 11:01:47.443: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036722307s
Dec 30 11:01:49.465: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058778812s
Dec 30 11:01:52.110: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704269619s
Dec 30 11:01:54.127: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721607638s
Dec 30 11:01:56.140: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.734246536s
STEP: Saw pod success
Dec 30 11:01:56.140: INFO: Pod "downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:01:56.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 11:01:57.223: INFO: Waiting for pod downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005 to disappear
Dec 30 11:01:57.487: INFO: Pod downward-api-c4b6a0e8-2af3-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:01:57.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z2lrw" for this suite.
Dec 30 11:02:03.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:02:03.706: INFO: namespace: e2e-tests-downward-api-z2lrw, resource: bindings, ignored listing per whitelist
Dec 30 11:02:03.893: INFO: namespace e2e-tests-downward-api-z2lrw deletion completed in 6.395633579s

• [SLOW TEST:18.770 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
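For limits and requests, the downward API uses `resourceFieldRef` rather than `fieldRef`. A sketch of two of the env entries this test wires up, using the `dapi-container` name that appears in the log (the variable names are assumptions):

```python
env = [
    {"name": "CPU_LIMIT",
     "valueFrom": {"resourceFieldRef": {"containerName": "dapi-container",
                                        "resource": "limits.cpu"}}},
    {"name": "MEMORY_REQUEST",
     "valueFrom": {"resourceFieldRef": {"containerName": "dapi-container",
                                        "resource": "requests.memory"}}},
]
```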
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:02:03.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 11:02:04.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-7rkfq" to be "success or failure"
Dec 30 11:02:04.398: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.535261ms
Dec 30 11:02:06.590: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221417612s
Dec 30 11:02:08.663: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294648992s
Dec 30 11:02:10.676: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307164403s
Dec 30 11:02:12.701: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.331973676s
Dec 30 11:02:14.711: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.341711541s
STEP: Saw pod success
Dec 30 11:02:14.711: INFO: Pod "downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:02:14.714: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 11:02:14.767: INFO: Waiting for pod downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005 to disappear
Dec 30 11:02:14.814: INFO: Pod downwardapi-volume-d00132c7-2af3-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:02:14.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7rkfq" for this suite.
Dec 30 11:02:20.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:02:20.973: INFO: namespace: e2e-tests-downward-api-7rkfq, resource: bindings, ignored listing per whitelist
Dec 30 11:02:21.216: INFO: namespace e2e-tests-downward-api-7rkfq deletion completed in 6.320909944s

• [SLOW TEST:17.322 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
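Here the same `resourceFieldRef` mechanism is surfaced as a file in a downwardAPI volume instead of an env var. A sketch of the volume, using the `client-container` name from the log (the volume name, file path, and divisor are assumptions):

```python
volume = {
    "name": "podinfo",
    "downwardAPI": {"items": [{
        # The container reads its own cpu request from this file.
        "path": "cpu_request",
        "resourceFieldRef": {"containerName": "client-container",
                             "resource": "requests.cpu",
                             "divisor": "1m"},
    }]},
}
```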
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:02:21.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-w22jp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 11:02:21.447: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 11:03:01.642: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-w22jp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 11:03:01.642: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 11:03:02.139: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:03:02.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-w22jp" for this suite.
Dec 30 11:03:26.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:03:26.413: INFO: namespace: e2e-tests-pod-network-test-w22jp, resource: bindings, ignored listing per whitelist
Dec 30 11:03:26.460: INFO: namespace e2e-tests-pod-network-test-w22jp deletion completed in 24.293697543s

• [SLOW TEST:65.244 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
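The probe command logged at 11:03:01 asks the test pod at 10.32.0.5 to dial the target pod at 10.32.0.4 and report its hostname. A sketch that reconstructs that `/dial` URL (the helper is illustrative, not framework code):

```python
from urllib.parse import urlencode

def dial_url(prober_ip: str, target_ip: str, port: int = 8080,
             protocol: str = "http", tries: int = 1) -> str:
    """Recreate the /dial probe URL issued from the host test container."""
    query = urlencode({"request": "hostName", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{prober_ip}:{port}/dial?{query}"

# IPs taken from the ExecWithOptions line in the log above.
url = dial_url("10.32.0.5", "10.32.0.4")
```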
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:03:26.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-vbb4
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 11:03:26.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vbb4" in namespace "e2e-tests-subpath-lrtd4" to be "success or failure"
Dec 30 11:03:26.900: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.333146ms
Dec 30 11:03:28.913: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030828031s
Dec 30 11:03:30.927: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045376354s
Dec 30 11:03:33.406: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.52432308s
Dec 30 11:03:35.434: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552222138s
Dec 30 11:03:37.462: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.580060848s
Dec 30 11:03:39.675: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.79280773s
Dec 30 11:03:42.392: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=true. Elapsed: 15.510219931s
Dec 30 11:03:44.414: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 17.532192285s
Dec 30 11:03:46.449: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 19.567009065s
Dec 30 11:03:48.472: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 21.590053133s
Dec 30 11:03:50.516: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 23.634444044s
Dec 30 11:03:52.567: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 25.685102818s
Dec 30 11:03:54.597: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 27.715384888s
Dec 30 11:03:56.619: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 29.737059128s
Dec 30 11:03:58.652: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 31.770201634s
Dec 30 11:04:00.685: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Running", Reason="", readiness=false. Elapsed: 33.803186434s
Dec 30 11:04:02.701: INFO: Pod "pod-subpath-test-configmap-vbb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.818638663s
STEP: Saw pod success
Dec 30 11:04:02.701: INFO: Pod "pod-subpath-test-configmap-vbb4" satisfied condition "success or failure"
Dec 30 11:04:02.709: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-vbb4 container test-container-subpath-configmap-vbb4: 
STEP: delete the pod
Dec 30 11:04:02.895: INFO: Waiting for pod pod-subpath-test-configmap-vbb4 to disappear
Dec 30 11:04:02.913: INFO: Pod pod-subpath-test-configmap-vbb4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vbb4
Dec 30 11:04:02.913: INFO: Deleting pod "pod-subpath-test-configmap-vbb4" in namespace "e2e-tests-subpath-lrtd4"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:04:02.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lrtd4" for this suite.
Dec 30 11:04:10.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:04:11.048: INFO: namespace: e2e-tests-subpath-lrtd4, resource: bindings, ignored listing per whitelist
Dec 30 11:04:11.122: INFO: namespace e2e-tests-subpath-lrtd4 deletion completed in 8.17746834s

• [SLOW TEST:44.661 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
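The subpath test mounts a single ConfigMap key over a file that already exists in the container image, via `subPath`. A sketch of the volumeMount shape being exercised — the mount path, volume name, and key are assumptions modelled on the test's description:

```python
mount = {
    "name": "configmap-volume",
    # Mounting directly over an existing file, rather than a directory,
    # is the case this conformance test covers.
    "mountPath": "/etc/hosts",
    "subPath": "configmap-key",  # one key from the ConfigMap volume
}
```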
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:04:11.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:04:17.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-jkmc9" for this suite.
Dec 30 11:04:24.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:04:24.156: INFO: namespace: e2e-tests-namespaces-jkmc9, resource: bindings, ignored listing per whitelist
Dec 30 11:04:24.264: INFO: namespace e2e-tests-namespaces-jkmc9 deletion completed in 6.426064475s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7r4x7" for this suite.
Dec 30 11:04:24.295: INFO: Namespace e2e-tests-nsdeletetest-7r4x7 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-zpvgx" for this suite.
Dec 30 11:04:30.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:04:30.434: INFO: namespace: e2e-tests-nsdeletetest-zpvgx, resource: bindings, ignored listing per whitelist
Dec 30 11:04:30.615: INFO: namespace e2e-tests-nsdeletetest-zpvgx deletion completed in 6.320132318s

• [SLOW TEST:19.493 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:04:30.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-274c426e-2af4-11ea-8970-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-274c42e4-2af4-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-274c426e-2af4-11ea-8970-0242ac110005
STEP: Updating configmap cm-test-opt-upd-274c42e4-2af4-11ea-8970-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-274c4305-2af4-11ea-8970-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:06:17.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5tnd5" for this suite.
Dec 30 11:06:41.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:06:42.076: INFO: namespace: e2e-tests-projected-5tnd5, resource: bindings, ignored listing per whitelist
Dec 30 11:06:42.099: INFO: namespace e2e-tests-projected-5tnd5 deletion completed in 24.254474999s

• [SLOW TEST:131.484 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:06:42.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:06:42.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 30 11:06:42.394: INFO: stderr: ""
Dec 30 11:06:42.394: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 30 11:06:42.399: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:06:42.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gfvwc" for this suite.
Dec 30 11:06:48.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:06:48.865: INFO: namespace: e2e-tests-kubectl-gfvwc, resource: bindings, ignored listing per whitelist
Dec 30 11:06:48.875: INFO: namespace e2e-tests-kubectl-gfvwc deletion completed in 6.432620223s

S [SKIPPING] [6.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 30 11:06:42.399: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:06:48.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 30 11:06:49.082: INFO: Waiting up to 5m0s for pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-54pf2" to be "success or failure"
Dec 30 11:06:49.095: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.748724ms
Dec 30 11:06:51.343: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260671083s
Dec 30 11:06:53.380: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297897429s
Dec 30 11:06:55.397: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314530946s
Dec 30 11:06:57.406: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323640524s
Dec 30 11:06:59.420: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.33781094s
STEP: Saw pod success
Dec 30 11:06:59.420: INFO: Pod "downward-api-79b49eb1-2af4-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:06:59.426: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-79b49eb1-2af4-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 11:06:59.504: INFO: Waiting for pod downward-api-79b49eb1-2af4-11ea-8970-0242ac110005 to disappear
Dec 30 11:06:59.521: INFO: Pod downward-api-79b49eb1-2af4-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:06:59.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-54pf2" for this suite.
Dec 30 11:07:05.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:07:05.988: INFO: namespace: e2e-tests-downward-api-54pf2, resource: bindings, ignored listing per whitelist
Dec 30 11:07:05.988: INFO: namespace e2e-tests-downward-api-54pf2 deletion completed in 6.452185539s

• [SLOW TEST:17.113 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:07:05.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 11:07:06.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bx49d'
Dec 30 11:07:08.258: INFO: stderr: ""
Dec 30 11:07:08.258: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 30 11:07:18.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bx49d -o json'
Dec 30 11:07:18.468: INFO: stderr: ""
Dec 30 11:07:18.468: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-30T11:07:08Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-bx49d\",\n        \"resourceVersion\": \"16562141\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-bx49d/pods/e2e-test-nginx-pod\",\n        \"uid\": \"85216738-2af4-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-2vzgb\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-2vzgb\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-2vzgb\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T11:07:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T11:07:18Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T11:07:18Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T11:07:08Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://9ae06b7f4bf589b67d0fa02a7e5a0d063a113e063f75f797e8f4bcc94f5aa7cf\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-30T11:07:16Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-30T11:07:08Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 30 11:07:18.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-bx49d'
Dec 30 11:07:18.810: INFO: stderr: ""
Dec 30 11:07:18.810: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 30 11:07:18.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bx49d'
Dec 30 11:07:32.614: INFO: stderr: ""
Dec 30 11:07:32.614: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:07:32.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bx49d" for this suite.
Dec 30 11:07:38.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:07:38.868: INFO: namespace: e2e-tests-kubectl-bx49d, resource: bindings, ignored listing per whitelist
Dec 30 11:07:38.882: INFO: namespace e2e-tests-kubectl-bx49d deletion completed in 6.193134088s

• [SLOW TEST:32.893 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:07:38.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1230 11:07:42.175296       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 11:07:42.175: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:07:42.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6zmgj" for this suite.
Dec 30 11:07:48.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:07:48.285: INFO: namespace: e2e-tests-gc-6zmgj, resource: bindings, ignored listing per whitelist
Dec 30 11:07:48.349: INFO: namespace e2e-tests-gc-6zmgj deletion completed in 6.169954923s

• [SLOW TEST:9.467 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:07:48.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-9tp8
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 11:07:48.646: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9tp8" in namespace "e2e-tests-subpath-2w67q" to be "success or failure"
Dec 30 11:07:48.683: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 37.333766ms
Dec 30 11:07:50.692: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046858132s
Dec 30 11:07:52.713: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067074508s
Dec 30 11:07:55.079: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433208351s
Dec 30 11:07:57.086: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440330248s
Dec 30 11:07:59.144: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.498717002s
Dec 30 11:08:01.209: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.563380453s
Dec 30 11:08:03.219: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 14.573850784s
Dec 30 11:08:05.228: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 16.582001071s
Dec 30 11:08:07.278: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 18.632270168s
Dec 30 11:08:09.381: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 20.735216687s
Dec 30 11:08:11.416: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 22.770458151s
Dec 30 11:08:13.494: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 24.848417318s
Dec 30 11:08:15.533: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 26.887557336s
Dec 30 11:08:17.555: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 28.909224883s
Dec 30 11:08:19.602: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Running", Reason="", readiness=false. Elapsed: 30.956106933s
Dec 30 11:08:21.628: INFO: Pod "pod-subpath-test-configmap-9tp8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.98235004s
STEP: Saw pod success
Dec 30 11:08:21.628: INFO: Pod "pod-subpath-test-configmap-9tp8" satisfied condition "success or failure"
Dec 30 11:08:21.633: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-9tp8 container test-container-subpath-configmap-9tp8: 
STEP: delete the pod
Dec 30 11:08:21.934: INFO: Waiting for pod pod-subpath-test-configmap-9tp8 to disappear
Dec 30 11:08:22.162: INFO: Pod pod-subpath-test-configmap-9tp8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9tp8
Dec 30 11:08:22.162: INFO: Deleting pod "pod-subpath-test-configmap-9tp8" in namespace "e2e-tests-subpath-2w67q"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:08:22.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2w67q" for this suite.
Dec 30 11:08:28.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:08:28.700: INFO: namespace: e2e-tests-subpath-2w67q, resource: bindings, ignored listing per whitelist
Dec 30 11:08:28.704: INFO: namespace e2e-tests-subpath-2w67q deletion completed in 6.497108293s

• [SLOW TEST:40.355 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:08:28.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-ddwgl/secret-test-b53cfdce-2af4-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 11:08:29.042: INFO: Waiting up to 5m0s for pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-ddwgl" to be "success or failure"
Dec 30 11:08:29.054: INFO: Pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.88096ms
Dec 30 11:08:31.071: INFO: Pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029675673s
Dec 30 11:08:33.837: INFO: Pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.79545942s
Dec 30 11:08:35.854: INFO: Pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81175514s
Dec 30 11:08:38.083: INFO: Pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.04163347s
STEP: Saw pod success
Dec 30 11:08:38.084: INFO: Pod "pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:08:38.102: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005 container env-test: 
STEP: delete the pod
Dec 30 11:08:38.211: INFO: Waiting for pod pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005 to disappear
Dec 30 11:08:38.222: INFO: Pod pod-configmaps-b53e07d9-2af4-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:08:38.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ddwgl" for this suite.
Dec 30 11:08:44.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:08:44.486: INFO: namespace: e2e-tests-secrets-ddwgl, resource: bindings, ignored listing per whitelist
Dec 30 11:08:44.569: INFO: namespace e2e-tests-secrets-ddwgl deletion completed in 6.342062103s

• [SLOW TEST:15.865 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:08:44.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 30 11:08:44.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:45.172: INFO: stderr: ""
Dec 30 11:08:45.172: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 11:08:45.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:45.373: INFO: stderr: ""
Dec 30 11:08:45.373: INFO: stdout: "update-demo-nautilus-5fvlm update-demo-nautilus-ls7pg "
Dec 30 11:08:45.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5fvlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:45.559: INFO: stderr: ""
Dec 30 11:08:45.559: INFO: stdout: ""
Dec 30 11:08:45.559: INFO: update-demo-nautilus-5fvlm is created but not running
Dec 30 11:08:50.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:50.705: INFO: stderr: ""
Dec 30 11:08:50.705: INFO: stdout: "update-demo-nautilus-5fvlm update-demo-nautilus-ls7pg "
Dec 30 11:08:50.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5fvlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:50.835: INFO: stderr: ""
Dec 30 11:08:50.835: INFO: stdout: ""
Dec 30 11:08:50.835: INFO: update-demo-nautilus-5fvlm is created but not running
Dec 30 11:08:55.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:55.985: INFO: stderr: ""
Dec 30 11:08:55.985: INFO: stdout: "update-demo-nautilus-5fvlm update-demo-nautilus-ls7pg "
Dec 30 11:08:55.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5fvlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:08:56.122: INFO: stderr: ""
Dec 30 11:08:56.122: INFO: stdout: ""
Dec 30 11:08:56.122: INFO: update-demo-nautilus-5fvlm is created but not running
Dec 30 11:09:01.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:01.336: INFO: stderr: ""
Dec 30 11:09:01.336: INFO: stdout: "update-demo-nautilus-5fvlm update-demo-nautilus-ls7pg "
Dec 30 11:09:01.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5fvlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:01.499: INFO: stderr: ""
Dec 30 11:09:01.499: INFO: stdout: "true"
Dec 30 11:09:01.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5fvlm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:01.590: INFO: stderr: ""
Dec 30 11:09:01.590: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:09:01.590: INFO: validating pod update-demo-nautilus-5fvlm
Dec 30 11:09:01.627: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:09:01.628: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 11:09:01.628: INFO: update-demo-nautilus-5fvlm is verified up and running
Dec 30 11:09:01.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls7pg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:01.760: INFO: stderr: ""
Dec 30 11:09:01.760: INFO: stdout: "true"
Dec 30 11:09:01.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls7pg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:01.866: INFO: stderr: ""
Dec 30 11:09:01.867: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:09:01.867: INFO: validating pod update-demo-nautilus-ls7pg
Dec 30 11:09:01.883: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:09:01.883: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 11:09:01.883: INFO: update-demo-nautilus-ls7pg is verified up and running
STEP: scaling down the replication controller
Dec 30 11:09:01.893: INFO: scanned /root for discovery docs: 
Dec 30 11:09:01.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:03.563: INFO: stderr: ""
Dec 30 11:09:03.564: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 11:09:03.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:03.727: INFO: stderr: ""
Dec 30 11:09:03.727: INFO: stdout: "update-demo-nautilus-5fvlm update-demo-nautilus-ls7pg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 30 11:09:08.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:08.911: INFO: stderr: ""
Dec 30 11:09:08.911: INFO: stdout: "update-demo-nautilus-5fvlm update-demo-nautilus-ls7pg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 30 11:09:13.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:14.061: INFO: stderr: ""
Dec 30 11:09:14.061: INFO: stdout: "update-demo-nautilus-ls7pg "
Dec 30 11:09:14.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls7pg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:14.166: INFO: stderr: ""
Dec 30 11:09:14.166: INFO: stdout: "true"
Dec 30 11:09:14.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls7pg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:14.256: INFO: stderr: ""
Dec 30 11:09:14.256: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:09:14.256: INFO: validating pod update-demo-nautilus-ls7pg
Dec 30 11:09:14.265: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:09:14.265: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 11:09:14.265: INFO: update-demo-nautilus-ls7pg is verified up and running
STEP: scaling up the replication controller
Dec 30 11:09:14.267: INFO: scanned /root for discovery docs: 
Dec 30 11:09:14.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:15.421: INFO: stderr: ""
Dec 30 11:09:15.421: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 11:09:15.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:15.545: INFO: stderr: ""
Dec 30 11:09:15.545: INFO: stdout: "update-demo-nautilus-d78cf update-demo-nautilus-ls7pg "
Dec 30 11:09:15.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d78cf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:15.670: INFO: stderr: ""
Dec 30 11:09:15.670: INFO: stdout: ""
Dec 30 11:09:15.670: INFO: update-demo-nautilus-d78cf is created but not running
Dec 30 11:09:20.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:20.939: INFO: stderr: ""
Dec 30 11:09:20.939: INFO: stdout: "update-demo-nautilus-d78cf update-demo-nautilus-ls7pg "
Dec 30 11:09:20.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d78cf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:21.068: INFO: stderr: ""
Dec 30 11:09:21.068: INFO: stdout: ""
Dec 30 11:09:21.068: INFO: update-demo-nautilus-d78cf is created but not running
Dec 30 11:09:26.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:26.220: INFO: stderr: ""
Dec 30 11:09:26.220: INFO: stdout: "update-demo-nautilus-d78cf update-demo-nautilus-ls7pg "
Dec 30 11:09:26.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d78cf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:26.348: INFO: stderr: ""
Dec 30 11:09:26.348: INFO: stdout: "true"
Dec 30 11:09:26.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d78cf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:26.458: INFO: stderr: ""
Dec 30 11:09:26.458: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:09:26.458: INFO: validating pod update-demo-nautilus-d78cf
Dec 30 11:09:26.514: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:09:26.514: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 11:09:26.514: INFO: update-demo-nautilus-d78cf is verified up and running
Dec 30 11:09:26.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls7pg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:26.639: INFO: stderr: ""
Dec 30 11:09:26.639: INFO: stdout: "true"
Dec 30 11:09:26.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls7pg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:26.735: INFO: stderr: ""
Dec 30 11:09:26.735: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:09:26.735: INFO: validating pod update-demo-nautilus-ls7pg
Dec 30 11:09:26.760: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:09:26.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 11:09:26.760: INFO: update-demo-nautilus-ls7pg is verified up and running
STEP: using delete to clean up resources
Dec 30 11:09:26.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:26.865: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 11:09:26.865: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 30 11:09:26.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-lsgww'
Dec 30 11:09:27.096: INFO: stderr: "No resources found.\n"
Dec 30 11:09:27.096: INFO: stdout: ""
Dec 30 11:09:27.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-lsgww -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 11:09:27.250: INFO: stderr: ""
Dec 30 11:09:27.250: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:09:27.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lsgww" for this suite.
Dec 30 11:09:51.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:09:51.420: INFO: namespace: e2e-tests-kubectl-lsgww, resource: bindings, ignored listing per whitelist
Dec 30 11:09:51.613: INFO: namespace e2e-tests-kubectl-lsgww deletion completed in 24.342279844s

• [SLOW TEST:67.044 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
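The Go template that the test above runs repeatedly (`{{if (exists . "status" "containerStatuses")}}…`) prints `"true"` only once the pod reports a container status named `update-demo` whose state map contains a `running` key; until then it prints nothing, which is why the log shows empty `stdout` and "created but not running" for several polls. A minimal Python equivalent of that check, using hand-built pod dicts for illustration (not real API output):

```python
def container_running(pod, name="update-demo"):
    """Mirror the log's Go template: true iff a containerStatus with the
    given name exists and its state map has a 'running' entry."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# No containerStatuses yet -> template would print "" (the empty stdout above).
pending_pod = {"status": {}}

# Container present and running -> template would print "true".
running_pod = {
    "status": {
        "containerStatuses": [
            {"name": "update-demo", "state": {"running": {}}}
        ]
    }
}

assert not container_running(pending_pod)
assert container_running(running_pod)
```

The same shape explains the second template in the log, which walks `spec.containers` and prints `.image` for the matching container name.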
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:09:51.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 30 11:09:51.995: INFO: Waiting up to 5m0s for pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005" in namespace "e2e-tests-containers-nr6dd" to be "success or failure"
Dec 30 11:09:52.017: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.755371ms
Dec 30 11:09:54.310: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314969297s
Dec 30 11:09:56.325: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329806988s
Dec 30 11:09:58.337: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341148986s
Dec 30 11:10:00.350: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.354288606s
Dec 30 11:10:02.369: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.373251604s
STEP: Saw pod success
Dec 30 11:10:02.369: INFO: Pod "client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:10:02.373: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:10:02.671: INFO: Waiting for pod client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005 to disappear
Dec 30 11:10:02.685: INFO: Pod client-containers-e6bd5b15-2af4-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:10:02.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-nr6dd" for this suite.
Dec 30 11:10:08.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:10:08.880: INFO: namespace: e2e-tests-containers-nr6dd, resource: bindings, ignored listing per whitelist
Dec 30 11:10:08.950: INFO: namespace e2e-tests-containers-nr6dd deletion completed in 6.259498531s

• [SLOW TEST:17.337 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
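The "Waiting up to 5m0s for pod … to be 'success or failure'" lines above are a poll loop: the framework re-reads the pod phase until it reaches a terminal state (`Succeeded` or `Failed`) or the budget expires. A sketch of that loop, assuming a caller-supplied `get_phase` callable and a 2-second interval (both illustrative, not the e2e framework's actual internals):

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase, or raise
    after `timeout` seconds -- the 'success or failure' wait in the log."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# The log above shows five 'Pending' polls followed by 'Succeeded'.
phases = iter(["Pending"] * 5 + ["Succeeded"])
assert wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None) == "Succeeded"
```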
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:10:08.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f0f1a1a4-2af4-11ea-8970-0242ac110005
STEP: Creating secret with name s-test-opt-upd-f0f1a206-2af4-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f0f1a1a4-2af4-11ea-8970-0242ac110005
STEP: Updating secret s-test-opt-upd-f0f1a206-2af4-11ea-8970-0242ac110005
STEP: Creating secret with name s-test-opt-create-f0f1a228-2af4-11ea-8970-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:11:50.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lpnff" for this suite.
Dec 30 11:12:30.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:12:30.318: INFO: namespace: e2e-tests-secrets-lpnff, resource: bindings, ignored listing per whitelist
Dec 30 11:12:30.342: INFO: namespace e2e-tests-secrets-lpnff deletion completed in 40.275033086s

• [SLOW TEST:141.392 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:12:30.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-4r7x9
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-4r7x9
STEP: Deleting pre-stop pod
Dec 30 11:12:55.765: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:12:55.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-4r7x9" for this suite.
Dec 30 11:13:35.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:13:36.087: INFO: namespace: e2e-tests-prestop-4r7x9, resource: bindings, ignored listing per whitelist
Dec 30 11:13:36.121: INFO: namespace e2e-tests-prestop-4r7x9 deletion completed in 40.285010189s

• [SLOW TEST:65.778 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
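The PreStop test passes because the state dump printed above shows the server pod received exactly one `prestop` hook call from the deleted tester pod. A minimal sketch of that verification, parsing a JSON literal copied from the log (field names match the dump; the exact assertions are an illustration, not the test's source):

```python
import json

# State reported by the server pod, as dumped in the log above.
state = json.loads("""
{
    "Hostname": "server",
    "Sent": null,
    "Received": {"prestop": 1},
    "Errors": null,
    "StillContactingPeers": true
}
""")

# The pre-stop hook must have fired exactly once, with no errors recorded.
assert state["Received"].get("prestop", 0) == 1
assert state["Errors"] is None
```

The `default/nettest has 0 endpoints` lines in the `Log` array are unrelated background noise from the tester image and do not affect the check.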
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:13:36.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 30 11:13:36.333: INFO: Waiting up to 5m0s for pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-jm8h2" to be "success or failure"
Dec 30 11:13:36.384: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.861335ms
Dec 30 11:13:38.400: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067576019s
Dec 30 11:13:40.434: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101700209s
Dec 30 11:13:43.241: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.908514231s
Dec 30 11:13:45.257: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.924312576s
Dec 30 11:13:47.271: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.938704544s
STEP: Saw pod success
Dec 30 11:13:47.272: INFO: Pod "pod-6c74ed4d-2af5-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:13:47.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6c74ed4d-2af5-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:13:48.425: INFO: Waiting for pod pod-6c74ed4d-2af5-11ea-8970-0242ac110005 to disappear
Dec 30 11:13:48.470: INFO: Pod pod-6c74ed4d-2af5-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:13:48.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jm8h2" for this suite.
Dec 30 11:13:54.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:13:54.765: INFO: namespace: e2e-tests-emptydir-jm8h2, resource: bindings, ignored listing per whitelist
Dec 30 11:13:54.824: INFO: namespace e2e-tests-emptydir-jm8h2 deletion completed in 6.317491066s

• [SLOW TEST:18.703 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:13:54.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1230 11:14:05.215750       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 11:14:05.215: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:14:05.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fj2v2" for this suite.
Dec 30 11:14:11.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:14:11.498: INFO: namespace: e2e-tests-gc-fj2v2, resource: bindings, ignored listing per whitelist
Dec 30 11:14:11.541: INFO: namespace e2e-tests-gc-fj2v2 deletion completed in 6.312978957s

• [SLOW TEST:16.716 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:14:11.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8187f71e-2af5-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 11:14:11.737: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-ffpxv" to be "success or failure"
Dec 30 11:14:11.767: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.654021ms
Dec 30 11:14:14.164: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42704064s
Dec 30 11:14:16.180: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442670979s
Dec 30 11:14:18.346: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608760072s
Dec 30 11:14:20.360: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622600501s
Dec 30 11:14:22.562: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.82467866s
STEP: Saw pod success
Dec 30 11:14:22.562: INFO: Pod "pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:14:22.591: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 11:14:23.440: INFO: Waiting for pod pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005 to disappear
Dec 30 11:14:23.507: INFO: Pod pod-projected-configmaps-8189150d-2af5-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:14:23.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ffpxv" for this suite.
Dec 30 11:14:29.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:14:29.609: INFO: namespace: e2e-tests-projected-ffpxv, resource: bindings, ignored listing per whitelist
Dec 30 11:14:29.713: INFO: namespace e2e-tests-projected-ffpxv deletion completed in 6.195059515s

• [SLOW TEST:18.172 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
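Editor's note: the test above consumes a projected ConfigMap volume from a non-root container. A minimal sketch of the kind of pod spec involved — the names, image, key, and mount path are assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                        # non-root, as the test title implies
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29                    # assumed image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]  # assumed key/path
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example  # hypothetical name
```

The test passes when the container exits successfully after reading the projected key, which is why the log polls the pod until `Phase="Succeeded"`.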
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:14:29.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 30 11:14:29.975: INFO: Waiting up to 5m0s for pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-4h2nf" to be "success or failure"
Dec 30 11:14:29.991: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.718511ms
Dec 30 11:14:32.005: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030254574s
Dec 30 11:14:34.026: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050497851s
Dec 30 11:14:36.045: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06988203s
Dec 30 11:14:38.058: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082498236s
Dec 30 11:14:40.068: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092614544s
STEP: Saw pod success
Dec 30 11:14:40.068: INFO: Pod "downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:14:40.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 11:14:40.197: INFO: Waiting for pod downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005 to disappear
Dec 30 11:14:40.215: INFO: Pod downward-api-8c6c1c4f-2af5-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:14:40.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4h2nf" for this suite.
Dec 30 11:14:46.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:14:46.479: INFO: namespace: e2e-tests-downward-api-4h2nf, resource: bindings, ignored listing per whitelist
Dec 30 11:14:46.664: INFO: namespace e2e-tests-downward-api-4h2nf deletion completed in 6.427589633s

• [SLOW TEST:16.951 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
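Editor's note: the Downward API test above exposes the node's IP to the container as an environment variable. The relevant fragment of such a pod spec looks roughly like this (`HOST_IP` is an assumed variable name; `status.hostIP` is the standard fieldRef path):

```yaml
containers:
- name: dapi-container
  image: busybox:1.29          # assumed image
  env:
  - name: HOST_IP              # hypothetical env var name
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```

The container can then read the host IP from its environment; the test verifies the value matches the node the pod was scheduled on (here `10.96.1.240`, per the HostIP seen later in this log).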
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:14:46.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 11:14:46.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-b7q6z" to be "success or failure"
Dec 30 11:14:46.849: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.526675ms
Dec 30 11:14:49.424: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586696438s
Dec 30 11:14:51.445: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.607980341s
Dec 30 11:14:53.460: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623056818s
Dec 30 11:14:55.472: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.634519223s
Dec 30 11:14:57.490: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652698678s
Dec 30 11:14:59.502: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.664701992s
STEP: Saw pod success
Dec 30 11:14:59.502: INFO: Pod "downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:14:59.507: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 11:14:59.591: INFO: Waiting for pod downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005 to disappear
Dec 30 11:14:59.601: INFO: Pod downwardapi-volume-96767e80-2af5-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:14:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b7q6z" for this suite.
Dec 30 11:15:05.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:15:05.902: INFO: namespace: e2e-tests-projected-b7q6z, resource: bindings, ignored listing per whitelist
Dec 30 11:15:05.942: INFO: namespace e2e-tests-projected-b7q6z deletion completed in 6.312547917s

• [SLOW TEST:19.278 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
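Editor's note: the projected downwardAPI test above mounts the pod's own name into a volume file. A sketch of the volume fragment, assuming a hypothetical mount path and file name:

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: podname          # hypothetical file name inside the volume
          fieldRef:
            fieldPath: metadata.name
```

The `client-container` seen in the log would mount this volume and print the file's contents, which the framework compares against the pod's actual name.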
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:15:05.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 30 11:15:06.239: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-pflbt" to be "success or failure"
Dec 30 11:15:06.280: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 41.490893ms
Dec 30 11:15:08.357: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118622117s
Dec 30 11:15:10.399: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159986854s
Dec 30 11:15:12.423: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184629232s
Dec 30 11:15:14.847: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608326539s
Dec 30 11:15:16.877: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63784308s
Dec 30 11:15:18.892: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.653611625s
STEP: Saw pod success
Dec 30 11:15:18.892: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 30 11:15:18.896: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 30 11:15:19.170: INFO: Waiting for pod pod-host-path-test to disappear
Dec 30 11:15:19.179: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:15:19.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-pflbt" for this suite.
Dec 30 11:15:26.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:15:26.383: INFO: namespace: e2e-tests-hostpath-pflbt, resource: bindings, ignored listing per whitelist
Dec 30 11:15:27.473: INFO: namespace e2e-tests-hostpath-pflbt deletion completed in 8.284408896s

• [SLOW TEST:21.530 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
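Editor's note: the HostPath test above checks the mode of a mounted hostPath volume. A sketch of the shape of `pod-host-path-test`, with the path and verification command assumed rather than taken from the log:

```yaml
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29                   # assumed image
    command: ["ls", "-ld", "/test-volume"]  # assumed mode check
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/test-volume              # hypothetical host path
```

The framework inspects the container's output to confirm the volume directory has the expected permission bits.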
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:15:27.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 30 11:15:46.230: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:46.245: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:15:48.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:48.282: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:15:50.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:50.265: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:15:52.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:52.258: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:15:54.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:54.256: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:15:56.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:56.261: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:15:58.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:15:58.260: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:16:00.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:16:00.261: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:16:02.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:16:02.274: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 11:16:04.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 11:16:04.263: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:16:04.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-klgbz" for this suite.
Dec 30 11:16:28.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:16:28.338: INFO: namespace: e2e-tests-container-lifecycle-hook-klgbz, resource: bindings, ignored listing per whitelist
Dec 30 11:16:28.652: INFO: namespace e2e-tests-container-lifecycle-hook-klgbz deletion completed in 24.380713901s

• [SLOW TEST:61.179 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
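Editor's note: the lifecycle-hook test above first starts a handler pod to receive the hook request, then creates `pod-with-poststart-http-hook`. The hook fragment of that pod would look roughly like this (path, port, and target are assumptions):

```yaml
containers:
- name: pod-with-poststart-http-hook
  image: k8s.gcr.io/pause:3.1       # assumed image
  lifecycle:
    postStart:
      httpGet:
        path: /echo?msg=poststart   # hypothetical handler path
        port: 8080                  # hypothetical handler port
```

The kubelet fires the `httpGet` immediately after the container starts; the test confirms the handler pod observed the request before deleting the hooked pod, which is the repeated "Waiting for pod ... to disappear" loop in the log.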
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:16:28.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 30 11:16:28.911: INFO: Waiting up to 5m0s for pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005" in namespace "e2e-tests-containers-s4597" to be "success or failure"
Dec 30 11:16:28.925: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.900998ms
Dec 30 11:16:30.942: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030299773s
Dec 30 11:16:32.988: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076845906s
Dec 30 11:16:35.012: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100962943s
Dec 30 11:16:37.183: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272212783s
Dec 30 11:16:39.211: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.299302643s
STEP: Saw pod success
Dec 30 11:16:39.211: INFO: Pod "client-containers-d33c8964-2af5-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:16:39.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d33c8964-2af5-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:16:39.277: INFO: Waiting for pod client-containers-d33c8964-2af5-11ea-8970-0242ac110005 to disappear
Dec 30 11:16:39.322: INFO: Pod client-containers-d33c8964-2af5-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:16:39.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-s4597" for this suite.
Dec 30 11:16:45.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:16:45.466: INFO: namespace: e2e-tests-containers-s4597, resource: bindings, ignored listing per whitelist
Dec 30 11:16:45.552: INFO: namespace e2e-tests-containers-s4597 deletion completed in 6.221389094s

• [SLOW TEST:16.900 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
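Editor's note: the Docker Containers test above overrides the image's default entrypoint via the pod spec's `command` field. A minimal sketch, with the image and command assumed:

```yaml
containers:
- name: test-container
  image: busybox:1.29                          # assumed image
  command: ["/bin/sh", "-c", "echo overridden"]  # hypothetical override
```

Setting `command` replaces the image's ENTRYPOINT (while `args` would replace CMD); the test checks the container's output to confirm the override took effect.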
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:16:45.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-dd59dc80-2af5-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 11:16:45.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-mpjnb" to be "success or failure"
Dec 30 11:16:45.870: INFO: Pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.001602ms
Dec 30 11:16:47.888: INFO: Pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034453049s
Dec 30 11:16:49.899: INFO: Pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045269066s
Dec 30 11:16:51.921: INFO: Pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067036043s
Dec 30 11:16:53.967: INFO: Pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113736281s
STEP: Saw pod success
Dec 30 11:16:53.968: INFO: Pod "pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:16:53.979: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 11:16:54.309: INFO: Waiting for pod pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005 to disappear
Dec 30 11:16:54.320: INFO: Pod pod-configmaps-dd5b959b-2af5-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:16:54.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mpjnb" for this suite.
Dec 30 11:17:00.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:17:00.394: INFO: namespace: e2e-tests-configmap-mpjnb, resource: bindings, ignored listing per whitelist
Dec 30 11:17:00.696: INFO: namespace e2e-tests-configmap-mpjnb deletion completed in 6.365775057s

• [SLOW TEST:15.144 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
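Editor's note: the ConfigMap test above is the non-projected counterpart of the earlier projected-ConfigMap test: the ConfigMap is mounted directly as a volume and read by a non-root container. The volume fragment, with names assumed:

```yaml
spec:
  securityContext:
    runAsUser: 1000                      # non-root, per the test title
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example  # hypothetical name
```

The `configmap-volume-test` container seen in the log mounts this volume and cats a key file, succeeding only if the content is readable as the non-root user.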
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:17:00.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 30 11:17:00.876: INFO: PodSpec: initContainers in spec.initContainers
Dec 30 11:18:04.627: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e662273f-2af5-11ea-8970-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-wpf2p", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-wpf2p/pods/pod-init-e662273f-2af5-11ea-8970-0242ac110005", UID:"e6633047-2af5-11ea-a994-fa163e34d433", ResourceVersion:"16563491", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713301420, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"876787373"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nf62d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0023771c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nf62d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nf62d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nf62d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00231c678), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002360120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00231c6f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00231c710)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00231c718), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00231c71c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713301421, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713301421, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713301421, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713301420, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc002402920), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002402960), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002352310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e8da36d96f29864d1ad0e23e2da03d018e0cdaccc5fbe1a6f8c18eab14cd9ff9"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002402980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002402940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:18:04.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wpf2p" for this suite.
Dec 30 11:18:29.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:18:29.940: INFO: namespace: e2e-tests-init-container-wpf2p, resource: bindings, ignored listing per whitelist
Dec 30 11:18:29.940: INFO: namespace e2e-tests-init-container-wpf2p deletion completed in 24.758661181s

• [SLOW TEST:89.244 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:18:29.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1ba69d66-2af6-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:18:44.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zmwdr" for this suite.
Dec 30 11:19:08.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:19:08.729: INFO: namespace: e2e-tests-configmap-zmwdr, resource: bindings, ignored listing per whitelist
Dec 30 11:19:08.739: INFO: namespace e2e-tests-configmap-zmwdr deletion completed in 24.388108879s

• [SLOW TEST:38.799 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
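The ConfigMap test above creates a ConfigMap carrying both text and binary payloads and mounts it as a volume. A minimal sketch of such an object (names and values here are illustrative, not taken from the log; the real test uses a generated name like configmap-test-upd-…):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo   # illustrative name
data:
  text-key: "hello"             # plain UTF-8 text, surfaced as a file in the volume
binaryData:
  binary-key: aGVsbG8=          # arbitrary bytes, base64-encoded in the manifest
```

When mounted via a configMap volume, each key becomes a file; `binaryData` entries are written byte-for-byte, which is what "binary data should be reflected in volume" verifies.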
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:19:08.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-32b5dfea-2af6-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 11:19:08.957: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-5shph" to be "success or failure"
Dec 30 11:19:08.964: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.344206ms
Dec 30 11:19:11.343: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386434218s
Dec 30 11:19:13.351: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394216141s
Dec 30 11:19:15.367: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409902028s
Dec 30 11:19:17.512: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555165226s
Dec 30 11:19:19.526: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569136143s
STEP: Saw pod success
Dec 30 11:19:19.526: INFO: Pod "pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:19:19.531: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 11:19:19.645: INFO: Waiting for pod pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005 to disappear
Dec 30 11:19:19.725: INFO: Pod pod-projected-secrets-32b6b632-2af6-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:19:19.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5shph" for this suite.
Dec 30 11:19:25.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:19:25.981: INFO: namespace: e2e-tests-projected-5shph, resource: bindings, ignored listing per whitelist
Dec 30 11:19:26.002: INFO: namespace e2e-tests-projected-5shph deletion completed in 6.245381364s

• [SLOW TEST:17.263 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
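The projected-secret test above mounts a Secret through a projected volume with `defaultMode` set, running as a non-root user with an `fsGroup`. A hedged sketch of a pod of that shape (the secret name, user/group IDs, mode, and mount path are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-demo   # illustrative name
spec:
  securityContext:
    runAsUser: 1000    # non-root, per the test title
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400     # file mode applied to projected entries
      sources:
      - secret:
          name: my-secret   # assumed secret name
```

The test then reads the mounted files' permissions and ownership to confirm `defaultMode` and `fsGroup` took effect.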
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:19:26.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 30 11:19:26.184: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:19:43.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wtpsp" for this suite.
Dec 30 11:19:49.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:19:49.476: INFO: namespace: e2e-tests-init-container-wtpsp, resource: bindings, ignored listing per whitelist
Dec 30 11:19:49.479: INFO: namespace e2e-tests-init-container-wtpsp deletion completed in 6.195849417s

• [SLOW TEST:23.476 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
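The init-container test above builds a pod with `restartPolicy: Never` and two init containers that must complete before the app container starts. A minimal sketch under that shape, reusing the container names and images that appear in the pod dump earlier in this log (init1/init2 on busybox:1.29, run1 on pause:3.1):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo   # illustrative name
spec:
  restartPolicy: Never
  initContainers:       # run to completion, in order, before run1 starts
  - name: init1
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```

The conformance check watches pod conditions: `Initialized` stays `False` (reason `ContainersNotInitialized`) until both init containers exit 0, after which run1 is started.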
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:19:49.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:19:49.774: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.486757ms)
Dec 30 11:19:49.857: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 82.8492ms)
Dec 30 11:19:49.874: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.044325ms)
Dec 30 11:19:49.884: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.891265ms)
Dec 30 11:19:49.892: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.073183ms)
Dec 30 11:19:49.899: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.393173ms)
Dec 30 11:19:49.904: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.037096ms)
Dec 30 11:19:49.908: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.41911ms)
Dec 30 11:19:49.914: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.736925ms)
Dec 30 11:19:49.919: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.454625ms)
Dec 30 11:19:49.923: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.168322ms)
Dec 30 11:19:49.927: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.259365ms)
Dec 30 11:19:49.931: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.561207ms)
Dec 30 11:19:49.935: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.73629ms)
Dec 30 11:19:49.939: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.206828ms)
Dec 30 11:19:49.943: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.825089ms)
Dec 30 11:19:49.946: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.359561ms)
Dec 30 11:19:49.950: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.695864ms)
Dec 30 11:19:49.954: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.074251ms)
Dec 30 11:19:49.959: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.932123ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:19:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kqpwh" for this suite.
Dec 30 11:19:55.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:19:56.033: INFO: namespace: e2e-tests-proxy-kqpwh, resource: bindings, ignored listing per whitelist
Dec 30 11:19:56.096: INFO: namespace e2e-tests-proxy-kqpwh deletion completed in 6.13340062s

• [SLOW TEST:6.617 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:19:56.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:19:56.431: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4ef9fb2a-2af6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002257152), BlockOwnerDeletion:(*bool)(0xc002257153)}}
Dec 30 11:19:56.595: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4ef6bdae-2af6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00201b052), BlockOwnerDeletion:(*bool)(0xc00201b053)}}
Dec 30 11:19:56.700: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4ef8407c-2af6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0022572ea), BlockOwnerDeletion:(*bool)(0xc0022572eb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:20:01.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-c29ms" for this suite.
Dec 30 11:20:07.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:20:08.021: INFO: namespace: e2e-tests-gc-c29ms, resource: bindings, ignored listing per whitelist
Dec 30 11:20:08.028: INFO: namespace e2e-tests-gc-c29ms deletion completed in 6.273999882s

• [SLOW TEST:11.931 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
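The garbage-collector test above wires three pods into an ownership cycle (pod1 ← pod3 ← pod2 ← pod1) and verifies deletion is not blocked. The logged OwnerReferences for pod1 correspond to metadata of roughly this shape (sketch reconstructed from the log lines above):

```yaml
# pod1 as logged: owned by pod3, which is owned by pod2, which is owned by pod1
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 4ef9fb2a-2af6-11ea-a994-fa163e34d433
    controller: true
    blockOwnerDeletion: true
```

Even with `blockOwnerDeletion: true` on every edge, the garbage collector detects the cycle and deletes all three objects rather than deadlocking.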
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:20:08.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 30 11:20:18.832: INFO: Successfully updated pod "pod-update-activedeadlineseconds-560ab85c-2af6-11ea-8970-0242ac110005"
Dec 30 11:20:18.832: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-560ab85c-2af6-11ea-8970-0242ac110005" in namespace "e2e-tests-pods-d2rj5" to be "terminated due to deadline exceeded"
Dec 30 11:20:18.861: INFO: Pod "pod-update-activedeadlineseconds-560ab85c-2af6-11ea-8970-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 28.624528ms
Dec 30 11:20:20.876: INFO: Pod "pod-update-activedeadlineseconds-560ab85c-2af6-11ea-8970-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.044486591s
Dec 30 11:20:20.876: INFO: Pod "pod-update-activedeadlineseconds-560ab85c-2af6-11ea-8970-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:20:20.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d2rj5" for this suite.
Dec 30 11:20:26.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:20:27.077: INFO: namespace: e2e-tests-pods-d2rj5, resource: bindings, ignored listing per whitelist
Dec 30 11:20:27.082: INFO: namespace e2e-tests-pods-d2rj5 deletion completed in 6.192296459s

• [SLOW TEST:19.053 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
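The test above exercises one of the few pod-spec fields that is mutable on a live pod: `activeDeadlineSeconds` can be added or lowered after creation, and the kubelet then fails the pod with reason `DeadlineExceeded`, exactly as the log shows (`Phase="Failed", Reason="DeadlineExceeded"`). A hedged sketch of the field in a manifest (name, image, and deadline value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-activedeadline-demo   # illustrative name
spec:
  activeDeadlineSeconds: 5   # once this many seconds of active time elapse,
                             # the kubelet terminates the pod with DeadlineExceeded
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
```

In the test flow, the pod is created without the field (or with a large value) and the deadline is patched in afterwards, which is why the transition from `Running` to `Failed` happens within seconds of the update.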
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:20:27.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 30 11:20:27.208: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:20:27.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8k827" for this suite.
Dec 30 11:20:33.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:20:33.839: INFO: namespace: e2e-tests-kubectl-8k827, resource: bindings, ignored listing per whitelist
Dec 30 11:20:33.896: INFO: namespace e2e-tests-kubectl-8k827 deletion completed in 6.538943076s

• [SLOW TEST:6.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:20:33.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 30 11:20:34.227: INFO: Waiting up to 5m0s for pod "pod-658a322d-2af6-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-262rl" to be "success or failure"
Dec 30 11:20:34.244: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.68468ms
Dec 30 11:20:36.259: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032153799s
Dec 30 11:20:38.271: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044181096s
Dec 30 11:20:40.286: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058916344s
Dec 30 11:20:42.317: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090397889s
Dec 30 11:20:44.354: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127124197s
STEP: Saw pod success
Dec 30 11:20:44.354: INFO: Pod "pod-658a322d-2af6-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:20:44.368: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-658a322d-2af6-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:20:44.492: INFO: Waiting for pod pod-658a322d-2af6-11ea-8970-0242ac110005 to disappear
Dec 30 11:20:44.512: INFO: Pod pod-658a322d-2af6-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:20:44.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-262rl" for this suite.
Dec 30 11:20:50.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:20:50.857: INFO: namespace: e2e-tests-emptydir-262rl, resource: bindings, ignored listing per whitelist
Dec 30 11:20:50.906: INFO: namespace e2e-tests-emptydir-262rl deletion completed in 6.382580294s

• [SLOW TEST:17.010 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
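The emptyDir test above writes a file with mode 0644 into a tmpfs-backed volume as a non-root user. A minimal sketch of that configuration (the image and command are assumptions; the real test uses a dedicated e2e mount-test image and container name `test-container`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo   # illustrative name
spec:
  securityContext:
    runAsUser: 1000   # "non-root" in the test title
  containers:
  - name: test-container
    image: busybox:1.29   # assumed; the e2e suite uses its own mounttest image
    command: ["sh", "-c", "echo hi > /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # Memory medium makes the emptyDir tmpfs-backed
```

The pod runs to completion and the test inspects its logs, which is why the log shows the "success or failure" wait followed by "Saw pod success".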
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:20:50.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4cxrt
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4cxrt
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4cxrt
Dec 30 11:20:51.146: INFO: Found 0 stateful pods, waiting for 1
Dec 30 11:21:01.163: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 30 11:21:01.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 11:21:01.883: INFO: stderr: ""
Dec 30 11:21:01.883: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 11:21:01.883: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 11:21:01.905: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 30 11:21:11.920: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 11:21:11.920: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 11:21:11.960: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999725s
Dec 30 11:21:12.978: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981538238s
Dec 30 11:21:14.191: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96501103s
Dec 30 11:21:15.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.752077071s
Dec 30 11:21:16.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.733727285s
Dec 30 11:21:17.265: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.702545924s
Dec 30 11:21:18.279: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.677736344s
Dec 30 11:21:19.295: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.663456982s
Dec 30 11:21:20.312: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.648051354s
Dec 30 11:21:21.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 630.731072ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4cxrt
Dec 30 11:21:22.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:21:23.110: INFO: stderr: ""
Dec 30 11:21:23.110: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 11:21:23.110: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 11:21:23.135: INFO: Found 1 stateful pods, waiting for 3
Dec 30 11:21:33.159: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:21:33.159: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:21:33.159: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 11:21:43.161: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:21:43.161: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:21:43.161: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 30 11:21:43.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 11:21:43.722: INFO: stderr: ""
Dec 30 11:21:43.722: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 11:21:43.722: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 11:21:43.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 11:21:44.418: INFO: stderr: ""
Dec 30 11:21:44.418: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 11:21:44.418: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 11:21:44.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 11:21:45.105: INFO: stderr: ""
Dec 30 11:21:45.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 11:21:45.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 11:21:45.105: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 11:21:45.122: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 30 11:21:55.139: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 11:21:55.140: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 11:21:55.140: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 11:21:55.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999507s
Dec 30 11:21:56.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.892562977s
Dec 30 11:21:57.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.872373178s
Dec 30 11:21:58.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.858528925s
Dec 30 11:21:59.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.839655709s
Dec 30 11:22:00.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.819300637s
Dec 30 11:22:01.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.796994575s
Dec 30 11:22:02.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.778628077s
Dec 30 11:22:03.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.763331547s
Dec 30 11:22:04.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 729.783012ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-4cxrt
Dec 30 11:22:05.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:06.155: INFO: stderr: ""
Dec 30 11:22:06.155: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 11:22:06.155: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 11:22:06.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:06.859: INFO: stderr: ""
Dec 30 11:22:06.859: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 11:22:06.859: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 11:22:06.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:07.222: INFO: rc: 126
Dec 30 11:22:07.222: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc001d40d50 exit status 126   true [0xc0025e4590 0xc0025e45a8 0xc0025e45c0] [0xc0025e4590 0xc0025e45a8 0xc0025e45c0] [0xc0025e45a0 0xc0025e45b8] [0x935700 0x935700] 0xc001da78c0 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126

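The rc 126 above looks surprising given the `|| true` guard, but that guard runs inside the container's shell: it masks the inner `mv`'s failure, not kubectl's. When the container has already stopped, `kubectl exec` fails before the shell ever starts, so kubectl's own exit status (126 here, then 1 once the pod object is gone) reaches the harness, and the framework retries every 10s until the command succeeds or times out. A local sketch of the distinction (paths are illustrative, not taken from the log):

```shell
# Inside the shell, `|| true` masks the inner command's failure:
sh -c 'mv -v /nonexistent /tmp/ 2>/dev/null || true'
echo "masked status: $?"   # prints: masked status: 0

# But if the wrapper that should launch the shell cannot start at all
# (analogous to kubectl exec against a stopped container), the guard
# never runs and the non-zero status propagates to the caller.
```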
Dec 30 11:22:17.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:17.526: INFO: rc: 1
Dec 30 11:22:17.527: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001d40e70 exit status 1   true [0xc0025e45c8 0xc0025e45e0 0xc0025e45f8] [0xc0025e45c8 0xc0025e45e0 0xc0025e45f8] [0xc0025e45d8 0xc0025e45f0] [0x935700 0x935700] 0xc001da7da0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 30 11:22:27.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:27.764: INFO: rc: 1
Dec 30 11:22:27.764: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0017352f0 exit status 1   true [0xc000173c00 0xc000173c18 0xc000173c30] [0xc000173c00 0xc000173c18 0xc000173c30] [0xc000173c10 0xc000173c28] [0x935700 0x935700] 0xc001afbc80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:22:37.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:37.933: INFO: rc: 1
Dec 30 11:22:37.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002594120 exit status 1   true [0xc002602000 0xc002602018 0xc002602030] [0xc002602000 0xc002602018 0xc002602030] [0xc002602010 0xc002602028] [0x935700 0x935700] 0xc0025d61e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:22:47.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:48.093: INFO: rc: 1
Dec 30 11:22:48.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001622120 exit status 1   true [0xc0025e4000 0xc0025e4018 0xc0025e4030] [0xc0025e4000 0xc0025e4018 0xc0025e4030] [0xc0025e4010 0xc0025e4028] [0x935700 0x935700] 0xc0025e0360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:22:58.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:22:58.250: INFO: rc: 1
Dec 30 11:22:58.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2120 exit status 1   true [0xc000172ba0 0xc000172cc0 0xc000172d50] [0xc000172ba0 0xc000172cc0 0xc000172d50] [0xc000172c50 0xc000172d38] [0x935700 0x935700] 0xc0025dc1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:23:08.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:23:08.378: INFO: rc: 1
Dec 30 11:23:08.379: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2300 exit status 1   true [0xc000172d78 0xc000172dc0 0xc000172e18] [0xc000172d78 0xc000172dc0 0xc000172e18] [0xc000172db0 0xc000172e00] [0x935700 0x935700] 0xc0025dc480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:23:18.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:23:18.557: INFO: rc: 1
Dec 30 11:23:18.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0016222d0 exit status 1   true [0xc0025e4038 0xc0025e4050 0xc0025e4068] [0xc0025e4038 0xc0025e4050 0xc0025e4068] [0xc0025e4048 0xc0025e4060] [0x935700 0x935700] 0xc0025e0600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:23:28.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:23:28.698: INFO: rc: 1
Dec 30 11:23:28.698: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00171c120 exit status 1   true [0xc00040c088 0xc00040c218 0xc00040c518] [0xc00040c088 0xc00040c218 0xc00040c518] [0xc00040c1d8 0xc00040c3e0] [0x935700 0x935700] 0xc0022121e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:23:38.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:23:38.783: INFO: rc: 1
Dec 30 11:23:38.783: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002594270 exit status 1   true [0xc002602038 0xc002602050 0xc002602068] [0xc002602038 0xc002602050 0xc002602068] [0xc002602048 0xc002602060] [0x935700 0x935700] 0xc0025d67e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:23:48.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:23:48.940: INFO: rc: 1
Dec 30 11:23:48.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00171c2a0 exit status 1   true [0xc00040c588 0xc00040c818 0xc00040c8f8] [0xc00040c588 0xc00040c818 0xc00040c8f8] [0xc00040c738 0xc00040c8d8] [0x935700 0x935700] 0xc002213980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:23:58.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:23:59.062: INFO: rc: 1
Dec 30 11:23:59.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001622450 exit status 1   true [0xc0025e4070 0xc0025e4088 0xc0025e40a0] [0xc0025e4070 0xc0025e4088 0xc0025e40a0] [0xc0025e4080 0xc0025e4098] [0x935700 0x935700] 0xc0025e08a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:24:09.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:24:09.250: INFO: rc: 1
Dec 30 11:24:09.250: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d25d0 exit status 1   true [0xc000172e28 0xc000172ea0 0xc000172ee0] [0xc000172e28 0xc000172ea0 0xc000172ee0] [0xc000172e68 0xc000172ed0] [0x935700 0x935700] 0xc0025dc720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:24:19.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:24:19.400: INFO: rc: 1
Dec 30 11:24:19.400: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2720 exit status 1   true [0xc000172f08 0xc000172f38 0xc000172f80] [0xc000172f08 0xc000172f38 0xc000172f80] [0xc000172f28 0xc000172f68] [0x935700 0x935700] 0xc0025dc9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:24:29.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:24:29.570: INFO: rc: 1
Dec 30 11:24:29.570: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001622570 exit status 1   true [0xc0025e40a8 0xc0025e40c0 0xc0025e40d8] [0xc0025e40a8 0xc0025e40c0 0xc0025e40d8] [0xc0025e40b8 0xc0025e40d0] [0x935700 0x935700] 0xc0025e0b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:24:39.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:24:39.745: INFO: rc: 1
Dec 30 11:24:39.746: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d21e0 exit status 1   true [0xc000172ba0 0xc000172cc0 0xc000172d50] [0xc000172ba0 0xc000172cc0 0xc000172d50] [0xc000172c50 0xc000172d38] [0x935700 0x935700] 0xc0025dc1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:24:49.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:24:49.863: INFO: rc: 1
Dec 30 11:24:49.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2330 exit status 1   true [0xc000172d78 0xc000172dc0 0xc000172e18] [0xc000172d78 0xc000172dc0 0xc000172e18] [0xc000172db0 0xc000172e00] [0x935700 0x935700] 0xc0025dc480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:24:59.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:25:00.127: INFO: rc: 1
Dec 30 11:25:00.127: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2660 exit status 1   true [0xc000172e28 0xc000172ea0 0xc000172ee0] [0xc000172e28 0xc000172ea0 0xc000172ee0] [0xc000172e68 0xc000172ed0] [0x935700 0x935700] 0xc0025dc720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:25:10.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:25:10.272: INFO: rc: 1
Dec 30 11:25:10.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002594150 exit status 1   true [0xc0025e4000 0xc0025e4018 0xc0025e4030] [0xc0025e4000 0xc0025e4018 0xc0025e4030] [0xc0025e4010 0xc0025e4028] [0x935700 0x935700] 0xc0025e0360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:25:20.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:25:20.442: INFO: rc: 1
Dec 30 11:25:20.442: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001622180 exit status 1   true [0xc00040c088 0xc00040c218 0xc00040c518] [0xc00040c088 0xc00040c218 0xc00040c518] [0xc00040c1d8 0xc00040c3e0] [0x935700 0x935700] 0xc0022121e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:25:30.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:25:30.619: INFO: rc: 1
Dec 30 11:25:30.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00171c150 exit status 1   true [0xc002602000 0xc002602018 0xc002602030] [0xc002602000 0xc002602018 0xc002602030] [0xc002602010 0xc002602028] [0x935700 0x935700] 0xc0025d61e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:25:40.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:25:40.748: INFO: rc: 1
Dec 30 11:25:40.749: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2840 exit status 1   true [0xc000172f08 0xc000172f38 0xc000172f80] [0xc000172f08 0xc000172f38 0xc000172f80] [0xc000172f28 0xc000172f68] [0x935700 0x935700] 0xc0025dc9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:25:50.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:25:50.874: INFO: rc: 1
Dec 30 11:25:50.874: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d29c0 exit status 1   true [0xc000172f90 0xc000172fe8 0xc000173040] [0xc000172f90 0xc000172fe8 0xc000173040] [0xc000172fc8 0xc000173028] [0x935700 0x935700] 0xc0025dcc60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:26:00.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:26:00.972: INFO: rc: 1
Dec 30 11:26:00.972: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001622330 exit status 1   true [0xc00040c588 0xc00040c818 0xc00040c8f8] [0xc00040c588 0xc00040c818 0xc00040c8f8] [0xc00040c738 0xc00040c8d8] [0x935700 0x935700] 0xc002213980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:26:10.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:26:11.108: INFO: rc: 1
Dec 30 11:26:11.108: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2ae0 exit status 1   true [0xc000173060 0xc0001730b8 0xc000173128] [0xc000173060 0xc0001730b8 0xc000173128] [0xc000173090 0xc000173108] [0x935700 0x935700] 0xc0025dcf00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:26:21.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:26:21.229: INFO: rc: 1
Dec 30 11:26:21.229: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002594300 exit status 1   true [0xc0025e4038 0xc0025e4050 0xc0025e4068] [0xc0025e4038 0xc0025e4050 0xc0025e4068] [0xc0025e4048 0xc0025e4060] [0x935700 0x935700] 0xc0025e0600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:26:31.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:26:31.343: INFO: rc: 1
Dec 30 11:26:31.343: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00171c270 exit status 1   true [0xc002602038 0xc002602050 0xc002602068] [0xc002602038 0xc002602050 0xc002602068] [0xc002602048 0xc002602060] [0x935700 0x935700] 0xc0025d67e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:26:41.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:26:41.500: INFO: rc: 1
Dec 30 11:26:41.501: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001622120 exit status 1   true [0xc00040c088 0xc00040c218 0xc00040c518] [0xc00040c088 0xc00040c218 0xc00040c518] [0xc00040c1d8 0xc00040c3e0] [0x935700 0x935700] 0xc0022121e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:26:51.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:26:51.650: INFO: rc: 1
Dec 30 11:26:51.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2120 exit status 1   true [0xc0025e4000 0xc0025e4018 0xc0025e4030] [0xc0025e4000 0xc0025e4018 0xc0025e4030] [0xc0025e4010 0xc0025e4028] [0x935700 0x935700] 0xc0025e0360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:27:01.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:27:01.866: INFO: rc: 1
Dec 30 11:27:01.867: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010d2300 exit status 1   true [0xc0025e4038 0xc0025e4050 0xc0025e4068] [0xc0025e4038 0xc0025e4050 0xc0025e4068] [0xc0025e4048 0xc0025e4060] [0x935700 0x935700] 0xc0025e0600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 30 11:27:11.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4cxrt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:27:12.028: INFO: rc: 1
Dec 30 11:27:12.029: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Dec 30 11:27:12.029: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 30 11:27:12.059: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4cxrt
Dec 30 11:27:12.064: INFO: Scaling statefulset ss to 0
Dec 30 11:27:12.076: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 11:27:12.079: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:27:12.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4cxrt" for this suite.
Dec 30 11:27:20.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:27:20.415: INFO: namespace: e2e-tests-statefulset-4cxrt, resource: bindings, ignored listing per whitelist
Dec 30 11:27:20.428: INFO: namespace e2e-tests-statefulset-4cxrt deletion completed in 8.224739406s

• [SLOW TEST:389.521 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
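The scale-down test above works by sabotaging the readiness probe: moving `index.html` out of the nginx web root makes the HTTP probe fail, the pods go `Running - Ready=false`, and the StatefulSet controller halts ordered scaling until the file is restored. A minimal sketch of the pod template shape such a test relies on (field values here are illustrative assumptions, not taken from the log):

```yaml
# Hypothetical StatefulSet fragment: an HTTP readiness probe against
# the file the test moves between the web root and /tmp.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test        # headless service created alongside the set
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html   # fails once index.html is moved to /tmp
            port: 80
```

While any pod is unready, the controller refuses to proceed (the "doesn't scale past 3" verification above); once readiness is restored it deletes pods in reverse ordinal order (ss-2, ss-1, ss-0), which is what "scaled down in reverse order" asserts.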
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:27:20.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-57df2a7c-2af7-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 11:27:20.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-6c9px" to be "success or failure"
Dec 30 11:27:20.802: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.46772ms
Dec 30 11:27:22.847: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05253636s
Dec 30 11:27:24.866: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071764133s
Dec 30 11:27:26.975: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180978477s
Dec 30 11:27:28.998: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20358641s
Dec 30 11:27:31.028: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.233148061s
STEP: Saw pod success
Dec 30 11:27:31.028: INFO: Pod "pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:27:31.040: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 11:27:31.158: INFO: Waiting for pod pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005 to disappear
Dec 30 11:27:31.291: INFO: Pod pod-configmaps-57e01eb9-2af7-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:27:31.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6c9px" for this suite.
Dec 30 11:27:37.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:27:37.494: INFO: namespace: e2e-tests-configmap-6c9px, resource: bindings, ignored listing per whitelist
Dec 30 11:27:37.677: INFO: namespace e2e-tests-configmap-6c9px deletion completed in 6.368691785s

• [SLOW TEST:17.248 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
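The ConfigMap test above creates a ConfigMap, mounts it into a pod as a volume with `defaultMode` set, and waits for the pod to reach `Succeeded`. A minimal manifest sketch of what that exercise looks like (the names, image, and command here are illustrative, not the generated names from the run above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox            # illustrative image, not the one used by the suite
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400       # file mode applied to every projected key
```

The test's "success or failure" condition corresponds to this pod's container exiting 0, after which the framework fetches its logs and deletes it, as the log lines above show.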
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:27:37.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-l86xz
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 30 11:27:38.128: INFO: Found 0 stateful pods, waiting for 3
Dec 30 11:27:48.219: INFO: Found 2 stateful pods, waiting for 3
Dec 30 11:27:58.156: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:27:58.156: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:27:58.156: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 11:28:08.149: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:28:08.149: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:28:08.149: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 11:28:08.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l86xz ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 11:28:09.103: INFO: stderr: ""
Dec 30 11:28:09.103: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 11:28:09.103: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 30 11:28:09.145: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 30 11:28:19.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l86xz ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:28:19.899: INFO: stderr: ""
Dec 30 11:28:19.899: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 11:28:19.899: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 11:28:29.970: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:28:29.970: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:28:29.970: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:28:29.970: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:28:39.985: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:28:39.985: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:28:39.985: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:28:49.985: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:28:49.985: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:28:49.985: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:29:00.795: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:29:00.795: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:29:10.002: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:29:10.002: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 11:29:20.717: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 30 11:29:30.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l86xz ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 11:29:30.752: INFO: stderr: ""
Dec 30 11:29:30.752: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 11:29:30.752: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 11:29:40.841: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 30 11:29:50.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l86xz ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 11:29:51.667: INFO: stderr: ""
Dec 30 11:29:51.667: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 11:29:51.667: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 11:30:01.726: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:30:01.726: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:01.726: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:01.726: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:11.863: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:30:11.863: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:11.863: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:21.759: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:30:21.759: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:21.759: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:31.768: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
Dec 30 11:30:31.768: INFO: Waiting for Pod e2e-tests-statefulset-l86xz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 11:30:41.747: INFO: Waiting for StatefulSet e2e-tests-statefulset-l86xz/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 30 11:30:51.756: INFO: Deleting all statefulset in ns e2e-tests-statefulset-l86xz
Dec 30 11:30:51.763: INFO: Scaling statefulset ss2 to 0
Dec 30 11:31:21.828: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 11:31:21.834: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:31:21.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-l86xz" for this suite.
Dec 30 11:31:30.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:31:30.098: INFO: namespace: e2e-tests-statefulset-l86xz, resource: bindings, ignored listing per whitelist
Dec 30 11:31:30.185: INFO: namespace e2e-tests-statefulset-l86xz deletion completed in 8.261106414s

• [SLOW TEST:232.508 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
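The StatefulSet test above creates a three-replica set running `nginx:1.14-alpine`, updates the template image to `nginx:1.15-alpine`, waits for the rolling update to replace pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), and then rolls the template back. A sketch of the kind of spec it drives, with illustrative labels (the actual e2e spec is not shown in the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test           # headless service created in the test namespace
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate       # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Each template change produces a new ControllerRevision; the `Waiting for Pod ... to have revision ... update revision ...` lines above are the test polling each pod's `controller-revision-hash` label until it matches the target revision.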
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:31:30.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 30 11:31:30.516: INFO: Waiting up to 5m0s for pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005" in namespace "e2e-tests-containers-kmhh7" to be "success or failure"
Dec 30 11:31:30.602: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.539035ms
Dec 30 11:31:32.671: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154894783s
Dec 30 11:31:34.681: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164738226s
Dec 30 11:31:36.743: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226376392s
Dec 30 11:31:38.754: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238069384s
Dec 30 11:31:40.806: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.289134469s
STEP: Saw pod success
Dec 30 11:31:40.806: INFO: Pod "client-containers-ecb10313-2af7-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:31:40.812: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ecb10313-2af7-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:31:41.176: INFO: Waiting for pod client-containers-ecb10313-2af7-11ea-8970-0242ac110005 to disappear
Dec 30 11:31:41.201: INFO: Pod client-containers-ecb10313-2af7-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:31:41.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-kmhh7" for this suite.
Dec 30 11:31:47.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:31:47.504: INFO: namespace: e2e-tests-containers-kmhh7, resource: bindings, ignored listing per whitelist
Dec 30 11:31:47.533: INFO: namespace e2e-tests-containers-kmhh7 deletion completed in 6.310365131s

• [SLOW TEST:17.348 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
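The Docker Containers test above verifies that `args` in a pod spec override the image's default `CMD` while keeping its `ENTRYPOINT`. A minimal sketch under that assumption (the image and argument values are illustrative; the suite uses its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # illustrative; args replace the image's CMD
    args: ["echo", "overridden args"]   # passed to the image's ENTRYPOINT
```

Setting `command` as well would additionally override the `ENTRYPOINT`; this test sets only `args`.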
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:31:47.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f6f27bbf-2af7-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 11:31:47.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-hbrws" to be "success or failure"
Dec 30 11:31:47.761: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.555088ms
Dec 30 11:31:49.854: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099166988s
Dec 30 11:31:51.887: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132182992s
Dec 30 11:31:53.905: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15039089s
Dec 30 11:31:55.928: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173001322s
Dec 30 11:31:58.061: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.306410737s
STEP: Saw pod success
Dec 30 11:31:58.062: INFO: Pod "pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:31:58.077: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 11:31:58.248: INFO: Waiting for pod pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005 to disappear
Dec 30 11:31:58.254: INFO: Pod pod-configmaps-f6fd76d7-2af7-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:31:58.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hbrws" for this suite.
Dec 30 11:32:04.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:32:04.596: INFO: namespace: e2e-tests-configmap-hbrws, resource: bindings, ignored listing per whitelist
Dec 30 11:32:04.625: INFO: namespace e2e-tests-configmap-hbrws deletion completed in 6.356277486s

• [SLOW TEST:17.092 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
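The second ConfigMap test differs from the earlier `defaultMode` case by using `items` to remap a ConfigMap key to a custom relative path inside the volume. A sketch of that mapping (key and path names are illustrative):

```yaml
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume-map
    items:
    - key: data-1             # ConfigMap key to project
      path: path/to/data-2    # relative path the key appears at inside the mount
```

With a mount at `/etc/configmap-volume`, the key's value would then be readable at `/etc/configmap-volume/path/to/data-2` rather than at a file named after the key.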
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:32:04.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 30 11:32:14.943: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-012bb6bd-2af8-11ea-8970-0242ac110005,GenerateName:,Namespace:e2e-tests-events-h89br,SelfLink:/api/v1/namespaces/e2e-tests-events-h89br/pods/send-events-012bb6bd-2af8-11ea-8970-0242ac110005,UID:012cebe8-2af8-11ea-a994-fa163e34d433,ResourceVersion:16565345,Generation:0,CreationTimestamp:2019-12-30 11:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 811953669,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bvmdl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bvmdl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bvmdl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000da43d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000da4450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 11:32:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 11:32:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 11:32:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 11:32:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-30 11:32:04 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-30 11:32:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://7950df393ad9f007f3ad9dbd94cc025063df007c870b97df1ec9214db255cacc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 30 11:32:16.959: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 30 11:32:19.004: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:32:19.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-h89br" for this suite.
Dec 30 11:33:05.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:33:05.373: INFO: namespace: e2e-tests-events-h89br, resource: bindings, ignored listing per whitelist
Dec 30 11:33:05.376: INFO: namespace e2e-tests-events-h89br deletion completed in 46.300365877s

• [SLOW TEST:60.751 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:33:05.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-99vvw
Dec 30 11:33:15.889: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-99vvw
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 11:33:15.896: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:37:17.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-99vvw" for this suite.
Dec 30 11:37:23.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:37:23.702: INFO: namespace: e2e-tests-container-probe-99vvw, resource: bindings, ignored listing per whitelist
Dec 30 11:37:23.804: INFO: namespace e2e-tests-container-probe-99vvw deletion completed in 6.254145259s

• [SLOW TEST:258.428 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
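The probe test above starts pod `liveness-http`, records its initial `restartCount` of 0, then observes it for roughly four minutes (11:33 to 11:37 in the timestamps) to confirm the count never increases, i.e. a healthy `/healthz` HTTP liveness probe must not trigger restarts. A sketch of such a pod; the image, port, and probe timings are assumptions, since the run's actual spec is not shown in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness          # assumed image serving /healthz
    livenessProbe:
      httpGet:
        path: /healthz                  # endpoint that keeps returning 2xx
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
```

If `/healthz` started failing, the kubelet would kill and restart the container and `status.containerStatuses[].restartCount` would climb, failing the test.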
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:37:23.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:37:24.137: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 30 11:37:24.208: INFO: Number of nodes with available pods: 0
Dec 30 11:37:24.208: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:25.604: INFO: Number of nodes with available pods: 0
Dec 30 11:37:25.604: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:26.246: INFO: Number of nodes with available pods: 0
Dec 30 11:37:26.247: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:27.333: INFO: Number of nodes with available pods: 0
Dec 30 11:37:27.333: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:28.265: INFO: Number of nodes with available pods: 0
Dec 30 11:37:28.265: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:29.468: INFO: Number of nodes with available pods: 0
Dec 30 11:37:29.468: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:30.252: INFO: Number of nodes with available pods: 0
Dec 30 11:37:30.252: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:31.239: INFO: Number of nodes with available pods: 0
Dec 30 11:37:31.239: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:32.264: INFO: Number of nodes with available pods: 0
Dec 30 11:37:32.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:33.268: INFO: Number of nodes with available pods: 0
Dec 30 11:37:33.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:34.236: INFO: Number of nodes with available pods: 1
Dec 30 11:37:34.236: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 30 11:37:34.383: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:35.416: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:36.413: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:37.422: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:38.556: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:39.485: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:40.538: INFO: Wrong image for pod: daemon-set-rgp8n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 11:37:40.538: INFO: Pod daemon-set-rgp8n is not available
Dec 30 11:37:41.408: INFO: Pod daemon-set-rcflv is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 30 11:37:41.434: INFO: Number of nodes with available pods: 0
Dec 30 11:37:41.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:42.558: INFO: Number of nodes with available pods: 0
Dec 30 11:37:42.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:43.474: INFO: Number of nodes with available pods: 0
Dec 30 11:37:43.474: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:44.457: INFO: Number of nodes with available pods: 0
Dec 30 11:37:44.457: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:45.991: INFO: Number of nodes with available pods: 0
Dec 30 11:37:45.991: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:46.558: INFO: Number of nodes with available pods: 0
Dec 30 11:37:46.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:47.480: INFO: Number of nodes with available pods: 0
Dec 30 11:37:47.480: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:48.471: INFO: Number of nodes with available pods: 0
Dec 30 11:37:48.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 11:37:49.452: INFO: Number of nodes with available pods: 1
Dec 30 11:37:49.452: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cb4w9, will wait for the garbage collector to delete the pods
Dec 30 11:37:49.552: INFO: Deleting DaemonSet.extensions daemon-set took: 12.604029ms
Dec 30 11:37:49.652: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.233412ms
Dec 30 11:38:02.677: INFO: Number of nodes with available pods: 0
Dec 30 11:38:02.677: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 11:38:02.681: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cb4w9/daemonsets","resourceVersion":"16565839"},"items":null}

Dec 30 11:38:02.685: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cb4w9/pods","resourceVersion":"16565839"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:38:02.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-cb4w9" for this suite.
Dec 30 11:38:08.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:38:08.924: INFO: namespace: e2e-tests-daemonsets-cb4w9, resource: bindings, ignored listing per whitelist
Dec 30 11:38:09.048: INFO: namespace e2e-tests-daemonsets-cb4w9 deletion completed in 6.349678341s

• [SLOW TEST:45.244 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:38:09.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 30 11:38:09.290: INFO: Waiting up to 5m0s for pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005" in namespace "e2e-tests-var-expansion-sn5x6" to be "success or failure"
Dec 30 11:38:09.385: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 94.716491ms
Dec 30 11:38:11.395: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105496728s
Dec 30 11:38:13.422: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132382267s
Dec 30 11:38:15.544: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25423652s
Dec 30 11:38:17.572: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282166118s
Dec 30 11:38:19.589: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.299004803s
STEP: Saw pod success
Dec 30 11:38:19.589: INFO: Pod "var-expansion-da671f88-2af8-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:38:19.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-da671f88-2af8-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 11:38:20.404: INFO: Waiting for pod var-expansion-da671f88-2af8-11ea-8970-0242ac110005 to disappear
Dec 30 11:38:20.443: INFO: Pod var-expansion-da671f88-2af8-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:38:20.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-sn5x6" for this suite.
Dec 30 11:38:28.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:38:28.687: INFO: namespace: e2e-tests-var-expansion-sn5x6, resource: bindings, ignored listing per whitelist
Dec 30 11:38:28.839: INFO: namespace e2e-tests-var-expansion-sn5x6 deletion completed in 8.367092293s

• [SLOW TEST:19.790 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:38:28.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-lndz6 in namespace e2e-tests-proxy-vtkj7
I1230 11:38:29.219940       8 runners.go:184] Created replication controller with name: proxy-service-lndz6, namespace: e2e-tests-proxy-vtkj7, replica count: 1
I1230 11:38:30.270807       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:31.271050       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:32.271322       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:33.271675       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:34.272091       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:35.272412       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:36.272622       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:37.272834       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 11:38:38.273295       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1230 11:38:39.273553       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1230 11:38:40.273860       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1230 11:38:41.274177       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1230 11:38:42.274473       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1230 11:38:43.274755       8 runners.go:184] proxy-service-lndz6 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 30 11:38:43.290: INFO: setup took 14.264708855s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 30 11:38:43.325: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vtkj7/pods/http:proxy-service-lndz6-fhq9x:160/proxy/: foo (200; 35.34044ms)
Dec 30 11:38:43.326: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vtkj7/pods/proxy-service-lndz6-fhq9x:160/proxy/: foo (200; 35.507736ms)
Dec 30 11:38:43.330: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vtkj7/pods/proxy-service-lndz6-fhq9x:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f8194856-2af8-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 11:38:59.129: INFO: Waiting up to 5m0s for pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-xbf4f" to be "success or failure"
Dec 30 11:38:59.187: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.194699ms
Dec 30 11:39:01.258: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129453639s
Dec 30 11:39:03.289: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160200086s
Dec 30 11:39:05.839: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.710297124s
Dec 30 11:39:07.866: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737008456s
Dec 30 11:39:09.886: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.757725734s
Dec 30 11:39:11.901: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.771777984s
STEP: Saw pod success
Dec 30 11:39:11.901: INFO: Pod "pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:39:11.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 11:39:12.284: INFO: Waiting for pod pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005 to disappear
Dec 30 11:39:12.323: INFO: Pod pod-configmaps-f81d1a5c-2af8-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:39:12.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xbf4f" for this suite.
Dec 30 11:39:18.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:39:18.836: INFO: namespace: e2e-tests-configmap-xbf4f, resource: bindings, ignored listing per whitelist
Dec 30 11:39:18.934: INFO: namespace e2e-tests-configmap-xbf4f deletion completed in 6.600176894s

• [SLOW TEST:19.976 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:39:18.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 11:39:19.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-49qcj" to be "success or failure"
Dec 30 11:39:19.328: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 182.794998ms
Dec 30 11:39:21.342: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196582625s
Dec 30 11:39:23.351: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20582975s
Dec 30 11:39:25.378: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232401104s
Dec 30 11:39:27.750: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604228432s
Dec 30 11:39:30.502: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.356541226s
STEP: Saw pod success
Dec 30 11:39:30.502: INFO: Pod "downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:39:30.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 11:39:30.793: INFO: Waiting for pod downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005 to disappear
Dec 30 11:39:30.800: INFO: Pod downwardapi-volume-040b57a5-2af9-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:39:30.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-49qcj" for this suite.
Dec 30 11:39:38.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:39:38.982: INFO: namespace: e2e-tests-downward-api-49qcj, resource: bindings, ignored listing per whitelist
Dec 30 11:39:39.123: INFO: namespace e2e-tests-downward-api-49qcj deletion completed in 8.315614801s

• [SLOW TEST:20.189 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:39:39.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 30 11:39:39.263: INFO: Waiting up to 5m0s for pod "pod-100a578a-2af9-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-tjrmr" to be "success or failure"
Dec 30 11:39:39.338: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.856224ms
Dec 30 11:39:41.863: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599285169s
Dec 30 11:39:43.883: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.619036887s
Dec 30 11:39:45.892: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.628117314s
Dec 30 11:39:47.902: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.638359257s
Dec 30 11:39:49.997: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.733248879s
STEP: Saw pod success
Dec 30 11:39:49.997: INFO: Pod "pod-100a578a-2af9-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:39:50.011: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-100a578a-2af9-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:39:50.551: INFO: Waiting for pod pod-100a578a-2af9-11ea-8970-0242ac110005 to disappear
Dec 30 11:39:50.578: INFO: Pod pod-100a578a-2af9-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:39:50.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tjrmr" for this suite.
Dec 30 11:39:56.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:39:56.886: INFO: namespace: e2e-tests-emptydir-tjrmr, resource: bindings, ignored listing per whitelist
Dec 30 11:39:57.189: INFO: namespace e2e-tests-emptydir-tjrmr deletion completed in 6.565777145s

• [SLOW TEST:18.065 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:39:57.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 30 11:40:08.200: INFO: Successfully updated pod "labelsupdate1ae1a9c2-2af9-11ea-8970-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:40:10.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nswmc" for this suite.
Dec 30 11:40:34.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:40:34.663: INFO: namespace: e2e-tests-downward-api-nswmc, resource: bindings, ignored listing per whitelist
Dec 30 11:40:34.701: INFO: namespace e2e-tests-downward-api-nswmc deletion completed in 24.269673199s

• [SLOW TEST:37.512 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:40:34.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 11:40:34.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nmhx4'
Dec 30 11:40:37.141: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 11:40:37.141: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 30 11:40:37.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-nmhx4'
Dec 30 11:40:37.520: INFO: stderr: ""
Dec 30 11:40:37.520: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:40:37.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nmhx4" for this suite.
Dec 30 11:40:57.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:40:58.092: INFO: namespace: e2e-tests-kubectl-nmhx4, resource: bindings, ignored listing per whitelist
Dec 30 11:40:58.212: INFO: namespace e2e-tests-kubectl-nmhx4 deletion completed in 20.442340719s

• [SLOW TEST:23.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:40:58.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005
Dec 30 11:40:58.449: INFO: Pod name my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005: Found 0 pods out of 1
Dec 30 11:41:04.037: INFO: Pod name my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005: Found 1 pods out of 1
Dec 30 11:41:04.037: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005" are running
Dec 30 11:41:08.077: INFO: Pod "my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005-kkt75" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:40:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:40:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:40:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:40:58 +0000 UTC Reason: Message:}])
Dec 30 11:41:08.078: INFO: Trying to dial the pod
Dec 30 11:41:13.207: INFO: Controller my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005: Got expected result from replica 1 [my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005-kkt75]: "my-hostname-basic-3f38611e-2af9-11ea-8970-0242ac110005-kkt75", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:41:13.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-hd44s" for this suite.
Dec 30 11:41:22.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:41:22.111: INFO: namespace: e2e-tests-replication-controller-hd44s, resource: bindings, ignored listing per whitelist
Dec 30 11:41:22.218: INFO: namespace e2e-tests-replication-controller-hd44s deletion completed in 9.003485498s

• [SLOW TEST:24.006 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:41:22.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:41:22.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 30 11:41:22.714: INFO: stderr: ""
Dec 30 11:41:22.714: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:41:22.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s2m6r" for this suite.
Dec 30 11:41:28.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:41:28.874: INFO: namespace: e2e-tests-kubectl-s2m6r, resource: bindings, ignored listing per whitelist
Dec 30 11:41:28.942: INFO: namespace e2e-tests-kubectl-s2m6r deletion completed in 6.21470852s

• [SLOW TEST:6.723 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:41:28.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 30 11:41:29.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5vdqs'
Dec 30 11:41:29.644: INFO: stderr: ""
Dec 30 11:41:29.644: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 30 11:41:30.660: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:30.660: INFO: Found 0 / 1
Dec 30 11:41:31.763: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:31.763: INFO: Found 0 / 1
Dec 30 11:41:32.665: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:32.665: INFO: Found 0 / 1
Dec 30 11:41:33.662: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:33.662: INFO: Found 0 / 1
Dec 30 11:41:35.012: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:35.013: INFO: Found 0 / 1
Dec 30 11:41:35.876: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:35.876: INFO: Found 0 / 1
Dec 30 11:41:36.674: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:36.674: INFO: Found 0 / 1
Dec 30 11:41:37.660: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:37.660: INFO: Found 0 / 1
Dec 30 11:41:38.660: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:38.660: INFO: Found 1 / 1
Dec 30 11:41:38.660: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 30 11:41:38.665: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:38.665: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 30 11:41:38.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tfscx --namespace=e2e-tests-kubectl-5vdqs -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 30 11:41:38.803: INFO: stderr: ""
Dec 30 11:41:38.803: INFO: stdout: "pod/redis-master-tfscx patched\n"
STEP: checking annotations
Dec 30 11:41:38.810: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 11:41:38.810: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:41:38.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5vdqs" for this suite.
Dec 30 11:42:02.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:42:03.085: INFO: namespace: e2e-tests-kubectl-5vdqs, resource: bindings, ignored listing per whitelist
Dec 30 11:42:03.119: INFO: namespace e2e-tests-kubectl-5vdqs deletion completed in 24.304547706s

• [SLOW TEST:34.177 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
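The "creating Redis RC" step above pipes a manifest into `kubectl create -f -`. A minimal sketch of such a ReplicationController follows; only the `app=redis` label is visible in the log (`Selector matched 1 pods for map[app:redis]`), so the image and replica count are assumptions:

```yaml
# Sketch of the ReplicationController fed to `kubectl create -f -` above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis   # assumption: any image filling the redis-master role
# Once the pod is Running, the test patches it, as in the log line above
# (quote the JSON for the shell when typing this by hand):
#   kubectl patch pod <pod-name> -p '{"metadata":{"annotations":{"x":"y"}}}'
```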
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:42:03.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:42:03.380: INFO: Creating ReplicaSet my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005
Dec 30 11:42:03.652: INFO: Pod name my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005: Found 0 pods out of 1
Dec 30 11:42:08.698: INFO: Pod name my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005: Found 1 pods out of 1
Dec 30 11:42:08.698: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005" is running
Dec 30 11:42:14.733: INFO: Pod "my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005-zrb8b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:42:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:42:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:42:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 11:42:03 +0000 UTC Reason: Message:}])
Dec 30 11:42:14.734: INFO: Trying to dial the pod
Dec 30 11:42:19.845: INFO: Controller my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005: Got expected result from replica 1 [my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005-zrb8b]: "my-hostname-basic-65f1fcb9-2af9-11ea-8970-0242ac110005-zrb8b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:42:19.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-t2xnb" for this suite.
Dec 30 11:42:25.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:42:25.991: INFO: namespace: e2e-tests-replicaset-t2xnb, resource: bindings, ignored listing per whitelist
Dec 30 11:42:26.107: INFO: namespace e2e-tests-replicaset-t2xnb deletion completed in 6.243988791s

• [SLOW TEST:22.988 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
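The ReplicaSet exercised above serves its own hostname from each replica, which is what "Got expected result from replica 1" verifies. A sketch of an equivalent manifest, assuming the e2e serve-hostname image and its default port (neither appears in the log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic   # the run above appends a generated UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # assumption: the e2e serve-hostname image, which answers HTTP
        # requests with the pod's hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376   # assumption: serve-hostname's default port
```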
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:42:26.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 30 11:42:26.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-jnc4q run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 30 11:42:38.023: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 30 11:42:38.023: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:42:40.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jnc4q" for this suite.
Dec 30 11:42:46.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:42:47.033: INFO: namespace: e2e-tests-kubectl-jnc4q, resource: bindings, ignored listing per whitelist
Dec 30 11:42:47.043: INFO: namespace e2e-tests-kubectl-jnc4q deletion completed in 6.637580161s

• [SLOW TEST:20.936 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
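The stderr above warns that `kubectl run --generator=job/v1` is deprecated. The same workload can be expressed as a Job manifest instead; this sketch takes the image, command, and restart policy directly from the logged invocation:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true   # the test attaches and feeds "abcd1234" on stdin
      restartPolicy: OnFailure
# Created with `kubectl create -f -`; without `run --rm`, clean up explicitly:
#   kubectl delete job e2e-test-rm-busybox-job
```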
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:42:47.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ql5wc
Dec 30 11:42:57.392: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ql5wc
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 11:42:57.398: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:46:58.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ql5wc" for this suite.
Dec 30 11:47:06.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:47:07.010: INFO: namespace: e2e-tests-container-probe-ql5wc, resource: bindings, ignored listing per whitelist
Dec 30 11:47:07.135: INFO: namespace e2e-tests-container-probe-ql5wc deletion completed in 8.252540442s

• [SLOW TEST:260.092 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
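The pod probed above is only guaranteed to use `cat /tmp/health` as its liveness command (it is in the test name); the rest of this sketch (busybox, touching the file and keeping it in place so the probe keeps succeeding and restartCount stays 0) is an assumption modeled on the well-known liveness-exec example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox   # assumption
    args:
    - /bin/sh
    - -c
    # assumption: create /tmp/health and leave it present for the whole
    # observation window, so the probe never fails and no restart occurs
    - touch /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # from the test name
      initialDelaySeconds: 15
      periodSeconds: 5
```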
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:47:07.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 30 11:47:07.300: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 30 11:47:07.362: INFO: Waiting for terminating namespaces to be deleted...
Dec 30 11:47:07.367: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 30 11:47:07.388: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 30 11:47:07.388: INFO: 	Container coredns ready: true, restart count 0
Dec 30 11:47:07.388: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:47:07.388: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:47:07.388: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:47:07.388: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 30 11:47:07.388: INFO: 	Container coredns ready: true, restart count 0
Dec 30 11:47:07.388: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 30 11:47:07.388: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 30 11:47:07.388: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:47:07.388: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 30 11:47:07.388: INFO: 	Container weave ready: true, restart count 0
Dec 30 11:47:07.388: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 30 11:47:07.447: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b2ef525-2afa-11ea-8970-0242ac110005.15e523a306a7e858], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-gvbcf/filler-pod-1b2ef525-2afa-11ea-8970-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b2ef525-2afa-11ea-8970-0242ac110005.15e523a411e26cab], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b2ef525-2afa-11ea-8970-0242ac110005.15e523a4caf97893], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b2ef525-2afa-11ea-8970-0242ac110005.15e523a4f3e734a4], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e523a55b3f2b70], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:47:18.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-gvbcf" for this suite.
Dec 30 11:47:26.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:47:26.904: INFO: namespace: e2e-tests-sched-pred-gvbcf, resource: bindings, ignored listing per whitelist
Dec 30 11:47:27.100: INFO: namespace e2e-tests-sched-pred-gvbcf deletion completed in 8.409797238s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:19.964 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
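The `additional-pod` that produces the `FailedScheduling` event above simply requests more CPU than the node has left after the filler pods. A sketch, reusing the pause image the filler pods are shown pulling; the exact request value is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # same image as the filler pods above
    resources:
      requests:
        cpu: "1"   # assumption: any value above the node's remaining allocatable CPU
# Expected event, as recorded above:
#   Warning  FailedScheduling  0/1 nodes are available: 1 Insufficient cpu.
```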
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:47:27.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 30 11:47:27.360: INFO: Waiting up to 5m0s for pod "pod-270bac4a-2afa-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-qmmf5" to be "success or failure"
Dec 30 11:47:27.385: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.828526ms
Dec 30 11:47:29.471: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11049001s
Dec 30 11:47:31.489: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128826989s
Dec 30 11:47:33.569: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208840345s
Dec 30 11:47:35.605: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244791865s
Dec 30 11:47:37.631: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.270760934s
STEP: Saw pod success
Dec 30 11:47:37.631: INFO: Pod "pod-270bac4a-2afa-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:47:37.639: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-270bac4a-2afa-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:47:38.397: INFO: Waiting for pod pod-270bac4a-2afa-11ea-8970-0242ac110005 to disappear
Dec 30 11:47:38.408: INFO: Pod pod-270bac4a-2afa-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:47:38.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qmmf5" for this suite.
Dec 30 11:47:44.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:47:44.995: INFO: namespace: e2e-tests-emptydir-qmmf5, resource: bindings, ignored listing per whitelist
Dec 30 11:47:45.042: INFO: namespace e2e-tests-emptydir-qmmf5 deletion completed in 6.626378638s

• [SLOW TEST:17.942 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
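The emptyDir tests above follow one pattern: a pod writes a file into the volume, prints its mode, and the harness checks the logs ("Trying to get logs from node ..."). A sketch for the (root,0666,tmpfs) case; the mode and medium come from the test name, while the mounttest image and its flags mirror the upstream e2e helper and are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  containers:
  - name: test-container
    # assumption: the e2e mounttest image, which creates a file with the
    # requested mode and prints the resulting permissions to stdout
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args:
    - --fs_type=/test-volume
    - --new_file_0666=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # "tmpfs" in the test name; omit for the default medium
  restartPolicy: Never
```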
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:47:45.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 30 11:47:45.445: INFO: Waiting up to 5m0s for pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-tzbrq" to be "success or failure"
Dec 30 11:47:45.470: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.970838ms
Dec 30 11:47:47.671: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225621453s
Dec 30 11:47:49.688: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242478927s
Dec 30 11:47:52.130: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.684330424s
Dec 30 11:47:54.147: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.70181485s
Dec 30 11:47:56.165: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.719565041s
Dec 30 11:47:58.194: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.748479423s
STEP: Saw pod success
Dec 30 11:47:58.194: INFO: Pod "pod-31c5bdbe-2afa-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:47:58.210: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-31c5bdbe-2afa-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 11:47:58.368: INFO: Waiting for pod pod-31c5bdbe-2afa-11ea-8970-0242ac110005 to disappear
Dec 30 11:47:58.429: INFO: Pod pod-31c5bdbe-2afa-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:47:58.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tzbrq" for this suite.
Dec 30 11:48:06.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:48:06.604: INFO: namespace: e2e-tests-emptydir-tzbrq, resource: bindings, ignored listing per whitelist
Dec 30 11:48:06.762: INFO: namespace e2e-tests-emptydir-tzbrq deletion completed in 8.323984403s

• [SLOW TEST:21.720 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:48:06.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:48:07.065: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 30 11:48:07.082: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2tcjs/daemonsets","resourceVersion":"16567004"},"items":null}

Dec 30 11:48:07.088: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2tcjs/pods","resourceVersion":"16567004"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:48:07.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2tcjs" for this suite.
Dec 30 11:48:13.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:48:13.287: INFO: namespace: e2e-tests-daemonsets-2tcjs, resource: bindings, ignored listing per whitelist
Dec 30 11:48:13.563: INFO: namespace e2e-tests-daemonsets-2tcjs deletion completed in 6.380572289s

S [SKIPPING] [6.800 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 30 11:48:07.065: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:48:13.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-42b014f4-2afa-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 11:48:13.853: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-8wjgk" to be "success or failure"
Dec 30 11:48:13.897: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.867968ms
Dec 30 11:48:15.949: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096500971s
Dec 30 11:48:17.970: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116804338s
Dec 30 11:48:20.087: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233913314s
Dec 30 11:48:22.123: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270451317s
Dec 30 11:48:24.141: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.287822641s
Dec 30 11:48:26.428: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.574693161s
STEP: Saw pod success
Dec 30 11:48:26.428: INFO: Pod "pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:48:26.452: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 11:48:26.728: INFO: Waiting for pod pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005 to disappear
Dec 30 11:48:26.740: INFO: Pod pod-projected-secrets-42b2bf86-2afa-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:48:26.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8wjgk" for this suite.
Dec 30 11:48:32.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:48:32.854: INFO: namespace: e2e-tests-projected-8wjgk, resource: bindings, ignored listing per whitelist
Dec 30 11:48:32.914: INFO: namespace e2e-tests-projected-8wjgk deletion completed in 6.165106454s

• [SLOW TEST:19.351 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
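"Mappings and Item Mode" in the test above refers to a projected secret volume whose `items` remap a key to a new path with an explicit file mode. A sketch under those assumptions (the key, path, and mode values are illustrative; the secret name in the run carries a UUID suffix):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args:
    - --file_content=/etc/projected-secret-volume/new-path-data-1
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # suffixed with a UUID in the run above
          items:
          - key: data-1             # assumption: a key present in the secret
            path: new-path-data-1   # the "mapping": key remapped to a new path
            mode: 0400              # the "Item Mode" the test name refers to
  restartPolicy: Never
```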
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:48:32.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:48:44.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qt552" for this suite.
Dec 30 11:49:08.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:49:08.551: INFO: namespace: e2e-tests-replication-controller-qt552, resource: bindings, ignored listing per whitelist
Dec 30 11:49:08.570: INFO: namespace e2e-tests-replication-controller-qt552 deletion completed in 24.36440806s

• [SLOW TEST:35.656 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:49:08.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-b5vjt
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 11:49:08.699: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 11:49:41.007: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-b5vjt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 11:49:41.007: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 11:49:42.644: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:49:42.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-b5vjt" for this suite.
Dec 30 11:50:06.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:50:06.846: INFO: namespace: e2e-tests-pod-network-test-b5vjt, resource: bindings, ignored listing per whitelist
Dec 30 11:50:06.848: INFO: namespace e2e-tests-pod-network-test-b5vjt deletion completed in 24.180291974s

• [SLOW TEST:58.278 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
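The UDP connectivity check above shells into the host-exec pod and runs `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081`, expecting the netserver pod to echo its hostname back over UDP. A minimal local sketch of that probe, using plain Python sockets with a stand-in listener instead of the netserver pod (addresses and port are placeholders, and this is not the e2e framework's implementation):

```python
import socket
import threading

def udp_hostname_server(sock):
    """Reply to every datagram with this machine's hostname,
    mimicking what the netserver pod does on its UDP port."""
    while True:
        _, addr = sock.recvfrom(1024)
        sock.sendto(socket.gethostname().encode(), addr)

def probe(addr):
    """Send one probe datagram and return the peer's reply --
    the Python analogue of `echo 'hostName' | nc -w 1 -u <ip> <port>`."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.settimeout(1.0)  # roughly nc's -w 1 timeout
        c.sendto(b"hostName", addr)
        reply, _ = c.recvfrom(1024)
        return reply.decode()

# Stand-in for the netserver pod: a local UDP listener on an ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=udp_hostname_server, args=(srv,), daemon=True).start()

print(probe(srv.getsockname()))
```

The test passes when the reply names the expected endpoint (here, `netserver-0` in the "Found all expected endpoints" line).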
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:50:06.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 30 11:50:07.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:07.477: INFO: stderr: ""
Dec 30 11:50:07.477: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 11:50:07.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:07.760: INFO: stderr: ""
Dec 30 11:50:07.760: INFO: stdout: "update-demo-nautilus-98mkb update-demo-nautilus-sfpg2 "
Dec 30 11:50:07.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98mkb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:07.899: INFO: stderr: ""
Dec 30 11:50:07.899: INFO: stdout: ""
Dec 30 11:50:07.899: INFO: update-demo-nautilus-98mkb is created but not running
Dec 30 11:50:12.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:13.125: INFO: stderr: ""
Dec 30 11:50:13.125: INFO: stdout: "update-demo-nautilus-98mkb update-demo-nautilus-sfpg2 "
Dec 30 11:50:13.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98mkb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:13.194: INFO: stderr: ""
Dec 30 11:50:13.194: INFO: stdout: ""
Dec 30 11:50:13.194: INFO: update-demo-nautilus-98mkb is created but not running
Dec 30 11:50:18.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:18.398: INFO: stderr: ""
Dec 30 11:50:18.398: INFO: stdout: "update-demo-nautilus-98mkb update-demo-nautilus-sfpg2 "
Dec 30 11:50:18.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98mkb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:18.688: INFO: stderr: ""
Dec 30 11:50:18.688: INFO: stdout: ""
Dec 30 11:50:18.688: INFO: update-demo-nautilus-98mkb is created but not running
Dec 30 11:50:23.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:23.872: INFO: stderr: ""
Dec 30 11:50:23.872: INFO: stdout: "update-demo-nautilus-98mkb update-demo-nautilus-sfpg2 "
Dec 30 11:50:23.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98mkb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:24.196: INFO: stderr: ""
Dec 30 11:50:24.196: INFO: stdout: "true"
Dec 30 11:50:24.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-98mkb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:24.317: INFO: stderr: ""
Dec 30 11:50:24.317: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:50:24.317: INFO: validating pod update-demo-nautilus-98mkb
Dec 30 11:50:24.332: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:50:24.332: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 11:50:24.332: INFO: update-demo-nautilus-98mkb is verified up and running
Dec 30 11:50:24.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfpg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:24.436: INFO: stderr: ""
Dec 30 11:50:24.436: INFO: stdout: "true"
Dec 30 11:50:24.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfpg2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:24.593: INFO: stderr: ""
Dec 30 11:50:24.593: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 11:50:24.593: INFO: validating pod update-demo-nautilus-sfpg2
Dec 30 11:50:24.648: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 11:50:24.648: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 11:50:24.648: INFO: update-demo-nautilus-sfpg2 is verified up and running
STEP: using delete to clean up resources
Dec 30 11:50:24.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:24.852: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 11:50:24.852: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 30 11:50:24.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-69nh9'
Dec 30 11:50:25.077: INFO: stderr: "No resources found.\n"
Dec 30 11:50:25.077: INFO: stdout: ""
Dec 30 11:50:25.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-69nh9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 11:50:25.224: INFO: stderr: ""
Dec 30 11:50:25.224: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:50:25.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-69nh9" for this suite.
Dec 30 11:50:49.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:50:49.500: INFO: namespace: e2e-tests-kubectl-69nh9, resource: bindings, ignored listing per whitelist
Dec 30 11:50:49.521: INFO: namespace e2e-tests-kubectl-69nh9 deletion completed in 24.279057639s

• [SLOW TEST:42.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
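The Update Demo test above polls the same `kubectl get pods ... --template` probe every five seconds: an empty stdout means "created but not running", and it keeps retrying until the template prints `true`. That retry pattern can be sketched as a small, generic helper (this is an illustrative reimplementation, not the e2e framework's own code):

```python
import time

def wait_until(check, timeout=300.0, interval=5.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns a truthy
    value, or raise TimeoutError after `timeout` seconds -- the loop
    the log shows for the container-state template probes."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)

# Example: a probe that, like the log above, reports "" twice
# ("created but not running") before finally printing "true".
answers = iter(["", "", "true"])
print(wait_until(lambda: next(answers), timeout=30, interval=0,
                 sleep=lambda s: None))
```

Injecting `clock` and `sleep` keeps the helper testable without real delays; in practice `check` would wrap the `kubectl ... --template` invocation.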
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:50:49.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 30 11:51:09.803: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:09.909: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:11.909: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:12.170: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:13.910: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:14.057: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:15.910: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:15.927: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:17.910: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:17.929: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:19.910: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:19.922: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:21.910: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:21.925: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 11:51:23.910: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 11:51:24.003: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:51:24.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5mcm7" for this suite.
Dec 30 11:51:48.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:51:48.241: INFO: namespace: e2e-tests-container-lifecycle-hook-5mcm7, resource: bindings, ignored listing per whitelist
Dec 30 11:51:49.291: INFO: namespace e2e-tests-container-lifecycle-hook-5mcm7 deletion completed in 25.210186908s

• [SLOW TEST:59.770 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
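The pod under test in this spec carries a `lifecycle.preStop` HTTP-GET hook pointing at the handler pod created in `[BeforeEach]`, so deleting the pod triggers a request the handler can later confirm. A hedged sketch of such a manifest (field names follow the Pod API, but the image, host, path, and port are placeholders, not the suite's actual values):

```yaml
# Illustrative only -- not the e2e suite's generated manifest.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1        # placeholder image
    lifecycle:
      preStop:
        httpGet:
          host: 10.32.0.4              # placeholder: the hook-handler pod IP
          path: /echo?msg=prestop      # placeholder path the handler records
          port: 8080
```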
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:51:49.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 11:51:49.515: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-5kh7h" to be "success or failure"
Dec 30 11:51:49.640: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 125.008863ms
Dec 30 11:51:51.659: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143702114s
Dec 30 11:51:53.672: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157511599s
Dec 30 11:51:55.813: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298186336s
Dec 30 11:51:58.424: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909473759s
Dec 30 11:52:00.451: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.936089495s
STEP: Saw pod success
Dec 30 11:52:00.451: INFO: Pod "downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:52:00.464: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 11:52:00.964: INFO: Waiting for pod downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005 to disappear
Dec 30 11:52:00.997: INFO: Pod downwardapi-volume-c34b8f63-2afa-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:52:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5kh7h" for this suite.
Dec 30 11:52:07.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:52:07.314: INFO: namespace: e2e-tests-projected-5kh7h, resource: bindings, ignored listing per whitelist
Dec 30 11:52:07.349: INFO: namespace e2e-tests-projected-5kh7h deletion completed in 6.333456952s

• [SLOW TEST:18.058 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
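The "set mode on item file" spec creates a pod whose projected volume maps a downward API field to a file with an explicit per-item mode, then reads the file's permissions back from the container. A hedged sketch of that shape (pod, volume, and container names plus the `0400` mode are illustrative, not the test's generated values):

```yaml
# Illustrative sketch only -- names and mode are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox                     # placeholder image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                 # the explicit item mode under test
```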
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:52:07.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 30 11:55:11.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:11.080: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:13.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:13.103: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:15.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:15.114: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:17.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:17.120: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:19.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:19.100: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:21.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:21.106: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:23.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:23.147: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:25.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:25.118: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:27.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:27.143: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:29.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:29.103: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:31.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:31.097: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:33.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:33.100: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:35.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:35.106: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:37.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:37.099: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:39.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:39.097: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:41.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:41.098: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:43.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:43.096: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:45.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:45.156: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:47.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:47.097: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:49.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:49.099: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:51.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:51.162: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:53.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:53.091: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:55.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:55.103: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:57.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:57.146: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:55:59.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:55:59.090: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:01.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:01.093: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:03.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:03.123: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:05.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:05.115: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:07.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:07.105: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:09.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:09.109: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:11.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:11.099: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:13.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:13.095: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:15.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:15.102: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:17.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:17.105: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:19.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:19.125: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:21.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:21.099: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:23.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:23.096: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:25.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:25.103: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:27.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:27.107: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:29.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:29.098: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:31.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:31.093: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:33.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:33.110: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:35.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:35.124: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:37.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:37.094: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 11:56:39.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 11:56:39.096: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:56:39.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6dnww" for this suite.
Dec 30 11:57:03.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:57:03.248: INFO: namespace: e2e-tests-container-lifecycle-hook-6dnww, resource: bindings, ignored listing per whitelist
Dec 30 11:57:03.500: INFO: namespace e2e-tests-container-lifecycle-hook-6dnww deletion completed in 24.390855125s

• [SLOW TEST:296.151 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:57:03.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 11:57:03.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:57:13.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gkqsb" for this suite.
Dec 30 11:57:55.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:57:56.138: INFO: namespace: e2e-tests-pods-gkqsb, resource: bindings, ignored listing per whitelist
Dec 30 11:57:56.168: INFO: namespace e2e-tests-pods-gkqsb deletion completed in 42.239560568s

• [SLOW TEST:52.668 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:57:56.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 30 11:57:56.391: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 30 11:57:56.407: INFO: Waiting for terminating namespaces to be deleted...
Dec 30 11:57:56.409: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 30 11:57:56.424: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 30 11:57:56.424: INFO: 	Container coredns ready: true, restart count 0
Dec 30 11:57:56.424: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 30 11:57:56.424: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 30 11:57:56.424: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:57:56.424: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 30 11:57:56.424: INFO: 	Container weave ready: true, restart count 0
Dec 30 11:57:56.424: INFO: 	Container weave-npc ready: true, restart count 0
Dec 30 11:57:56.424: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 30 11:57:56.424: INFO: 	Container coredns ready: true, restart count 0
Dec 30 11:57:56.424: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:57:56.424: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 11:57:56.424: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a41d1f9a-2afb-11ea-8970-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-a41d1f9a-2afb-11ea-8970-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a41d1f9a-2afb-11ea-8970-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:58:19.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rhw4b" for this suite.
Dec 30 11:58:33.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:58:33.281: INFO: namespace: e2e-tests-sched-pred-rhw4b, resource: bindings, ignored listing per whitelist
Dec 30 11:58:33.300: INFO: namespace e2e-tests-sched-pred-rhw4b deletion completed in 14.215339657s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.131 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
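The NodeSelector predicate validated above can be reproduced by hand: label a node, then create a pod whose `nodeSelector` requires that label. A minimal sketch, assuming an illustrative label key and value (the suite generates a random `kubernetes.io/e2e-<uuid>` key at runtime, e.g. `kubernetes.io/e2e-a41d1f9a-…` with value `42` in this run):

```yaml
# Illustrative manifest; label key/value and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    kubernetes.io/e2e-example: "42"   # must match a label on some node
```

Apply the label first with `kubectl label node <node-name> kubernetes.io/e2e-example=42`; without a matching node label the pod stays Pending, which is the negative case of this predicate.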
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:58:33.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 30 11:58:43.541: INFO: Pod pod-hostip-b4191ae3-2afb-11ea-8970-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:58:43.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-smqgf" for this suite.
Dec 30 11:59:07.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:59:07.674: INFO: namespace: e2e-tests-pods-smqgf, resource: bindings, ignored listing per whitelist
Dec 30 11:59:07.729: INFO: namespace e2e-tests-pods-smqgf deletion completed in 24.179430675s

• [SLOW TEST:34.429 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
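The host-IP check above creates an ordinary pod and then reads `status.hostIP` once the pod is scheduled. A hedged sketch (pod name and image are illustrative):

```yaml
# Illustrative manifest; names are assumptions, not the suite's generated names.
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example
spec:
  containers:
  - name: test
    image: docker.io/library/nginx:1.14-alpine
```

After the pod is running, `kubectl get pod pod-hostip-example -o jsonpath='{.status.hostIP}'` should print the IP of the node it landed on (10.96.1.240 in the run above).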
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:59:07.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:59:20.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-lshs8" for this suite.
Dec 30 11:59:26.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:59:26.417: INFO: namespace: e2e-tests-emptydir-wrapper-lshs8, resource: bindings, ignored listing per whitelist
Dec 30 11:59:26.739: INFO: namespace e2e-tests-emptydir-wrapper-lshs8 deletion completed in 6.494683513s

• [SLOW TEST:19.010 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
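The "should not conflict" case above mounts two wrapped volumes, one backed by a Secret and one by a ConfigMap, in the same pod and verifies their emptyDir wrappers do not collide. A minimal sketch, assuming illustrative resource names:

```yaml
# Illustrative manifest; the referenced Secret and ConfigMap
# (wrapper-secret, wrapper-configmap) must exist and are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-test
spec:
  containers:
  - name: test
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret
  - name: configmap-vol
    configMap:
      name: wrapper-configmap
```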
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:59:26.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 11:59:27.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-24mrz" to be "success or failure"
Dec 30 11:59:27.216: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 126.743006ms
Dec 30 11:59:29.525: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435536111s
Dec 30 11:59:31.551: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.461935927s
Dec 30 11:59:33.954: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.864682511s
Dec 30 11:59:35.984: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.895485286s
Dec 30 11:59:38.075: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.986369691s
STEP: Saw pod success
Dec 30 11:59:38.076: INFO: Pod "downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 11:59:38.085: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 11:59:38.579: INFO: Waiting for pod downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005 to disappear
Dec 30 11:59:38.605: INFO: Pod downwardapi-volume-d3f932e3-2afb-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 11:59:38.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-24mrz" for this suite.
Dec 30 11:59:46.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 11:59:46.848: INFO: namespace: e2e-tests-downward-api-24mrz, resource: bindings, ignored listing per whitelist
Dec 30 11:59:46.852: INFO: namespace e2e-tests-downward-api-24mrz deletion completed in 8.239123089s

• [SLOW TEST:20.113 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
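The downward API case above asserts that a per-item `mode` is applied to the projected file. A hedged sketch of such a pod (names, image, and the specific mode are illustrative; the container just stats the file and exits, matching the test's "success or failure" pattern):

```yaml
# Illustrative manifest; names and mode value are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # per-item file mode checked by the test
```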
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 11:59:46.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-4lsqz
Dec 30 11:59:57.116: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-4lsqz
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 11:59:57.120: INFO: Initial restart count of pod liveness-http is 0
Dec 30 12:00:19.340: INFO: Restart count of pod e2e-tests-container-probe-4lsqz/liveness-http is now 1 (22.220497491s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:00:19.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4lsqz" for this suite.
Dec 30 12:00:25.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:00:25.744: INFO: namespace: e2e-tests-container-probe-4lsqz, resource: bindings, ignored listing per whitelist
Dec 30 12:00:25.762: INFO: namespace e2e-tests-container-probe-4lsqz deletion completed in 6.193362934s

• [SLOW TEST:38.909 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
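The restart observed above (restart count going from 0 to 1 after ~22s) is driven by an HTTP liveness probe against `/healthz`. A minimal sketch, assuming the conformance liveness image that serves 200 for a while and then starts failing (port and delays are illustrative):

```yaml
# Illustrative manifest; image behavior and probe timings are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
```

Once the probe starts failing, the kubelet kills and restarts the container, which is what increments `restartCount`.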
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:00:25.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 30 12:00:34.123: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-f71fa38d-2afb-11ea-8970-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-7fk8c", SelfLink:"/api/v1/namespaces/e2e-tests-pods-7fk8c/pods/pod-submit-remove-f71fa38d-2afb-11ea-8970-0242ac110005", UID:"f726049e-2afb-11ea-a994-fa163e34d433", ResourceVersion:"16568351", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713304025, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"942522957"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mgm9x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0019b7e00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mgm9x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000539e28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e57380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000539ec0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc000539ef0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000539ef8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000539efc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713304026, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713304033, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713304033, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713304025, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0013b6bc0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0013b6c00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://285dcfe49b36dafdc9b19cac2993a1fc0eda1ad81530beaaa4730b75890126b8"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:00:41.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7fk8c" for this suite.
Dec 30 12:00:47.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:00:47.254: INFO: namespace: e2e-tests-pods-7fk8c, resource: bindings, ignored listing per whitelist
Dec 30 12:00:47.354: INFO: namespace e2e-tests-pods-7fk8c deletion completed in 6.241317262s

• [SLOW TEST:21.592 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
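The submit-and-remove flow above (watch, create, observe Running, delete gracefully, observe deletion) can be sketched with a plain pod and a watched delete. The manifest below mirrors the spec dumped in the log; the `time` label value is generated per run, so it is omitted here:

```yaml
# Illustrative manifest based on the spec dumped above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example
  labels:
    name: foo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
```

`kubectl get pods -l name=foo -w` in a second terminal shows the creation and deletion events the test asserts on; `kubectl delete pod pod-submit-remove-example` exercises the graceful path.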
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:00:47.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 30 12:00:47.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:00:49.606: INFO: stderr: ""
Dec 30 12:00:49.607: INFO: stdout: "pod/pause created\n"
Dec 30 12:00:49.607: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 30 12:00:49.607: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-swg5c" to be "running and ready"
Dec 30 12:00:49.695: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 88.10847ms
Dec 30 12:00:51.845: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238601924s
Dec 30 12:00:53.864: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257276875s
Dec 30 12:00:55.890: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283114001s
Dec 30 12:00:57.922: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315028441s
Dec 30 12:00:59.941: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.334381245s
Dec 30 12:00:59.941: INFO: Pod "pause" satisfied condition "running and ready"
Dec 30 12:00:59.941: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 30 12:00:59.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:01:00.218: INFO: stderr: ""
Dec 30 12:01:00.218: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 30 12:01:00.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:01:00.371: INFO: stderr: ""
Dec 30 12:01:00.371: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 30 12:01:00.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:01:00.614: INFO: stderr: ""
Dec 30 12:01:00.614: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 30 12:01:00.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:01:00.709: INFO: stderr: ""
Dec 30 12:01:00.709: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 30 12:01:00.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:01:00.915: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:01:00.915: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 30 12:01:00.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-swg5c'
Dec 30 12:01:01.162: INFO: stderr: "No resources found.\n"
Dec 30 12:01:01.162: INFO: stdout: ""
Dec 30 12:01:01.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-swg5c -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 12:01:01.277: INFO: stderr: ""
Dec 30 12:01:01.277: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:01:01.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-swg5c" for this suite.
Dec 30 12:01:07.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:01:07.375: INFO: namespace: e2e-tests-kubectl-swg5c, resource: bindings, ignored listing per whitelist
Dec 30 12:01:07.485: INFO: namespace e2e-tests-kubectl-swg5c deletion completed in 6.201649967s

• [SLOW TEST:20.129 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:01:07.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 30 12:01:07.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27vw6'
Dec 30 12:01:08.081: INFO: stderr: ""
Dec 30 12:01:08.081: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 30 12:01:09.100: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:09.100: INFO: Found 0 / 1
Dec 30 12:01:10.227: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:10.228: INFO: Found 0 / 1
Dec 30 12:01:11.101: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:11.101: INFO: Found 0 / 1
Dec 30 12:01:12.103: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:12.103: INFO: Found 0 / 1
Dec 30 12:01:13.126: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:13.126: INFO: Found 0 / 1
Dec 30 12:01:14.108: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:14.109: INFO: Found 0 / 1
Dec 30 12:01:15.300: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:15.300: INFO: Found 0 / 1
Dec 30 12:01:16.115: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:16.115: INFO: Found 0 / 1
Dec 30 12:01:17.092: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:17.092: INFO: Found 0 / 1
Dec 30 12:01:18.103: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:18.103: INFO: Found 0 / 1
Dec 30 12:01:19.098: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:19.098: INFO: Found 1 / 1
Dec 30 12:01:19.098: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 30 12:01:19.105: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:01:19.105: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 30 12:01:19.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dm6zw redis-master --namespace=e2e-tests-kubectl-27vw6'
Dec 30 12:01:19.293: INFO: stderr: ""
Dec 30 12:01:19.293: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Dec 12:01:17.193 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Dec 12:01:17.193 # Server started, Redis version 3.2.12\n1:M 30 Dec 12:01:17.193 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Dec 12:01:17.194 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 30 12:01:19.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dm6zw redis-master --namespace=e2e-tests-kubectl-27vw6 --tail=1'
Dec 30 12:01:19.650: INFO: stderr: ""
Dec 30 12:01:19.650: INFO: stdout: "1:M 30 Dec 12:01:17.194 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 30 12:01:19.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dm6zw redis-master --namespace=e2e-tests-kubectl-27vw6 --limit-bytes=1'
Dec 30 12:01:19.900: INFO: stderr: ""
Dec 30 12:01:19.900: INFO: stdout: " "
STEP: exposing timestamps
Dec 30 12:01:19.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dm6zw redis-master --namespace=e2e-tests-kubectl-27vw6 --tail=1 --timestamps'
Dec 30 12:01:20.069: INFO: stderr: ""
Dec 30 12:01:20.069: INFO: stdout: "2019-12-30T12:01:17.194793947Z 1:M 30 Dec 12:01:17.194 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 30 12:01:22.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dm6zw redis-master --namespace=e2e-tests-kubectl-27vw6 --since=1s'
Dec 30 12:01:22.860: INFO: stderr: ""
Dec 30 12:01:22.860: INFO: stdout: ""
Dec 30 12:01:22.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dm6zw redis-master --namespace=e2e-tests-kubectl-27vw6 --since=24h'
Dec 30 12:01:23.005: INFO: stderr: ""
Dec 30 12:01:23.005: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Dec 12:01:17.193 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Dec 12:01:17.193 # Server started, Redis version 3.2.12\n1:M 30 Dec 12:01:17.193 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Dec 12:01:17.194 * The server is now ready to accept connections on port 6379\n"
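The log-filtering steps above exercise four distinct flags. A sketch of the equivalent commands, using the pod and container names from this run as placeholders (substitute your own; note the test invokes the deprecated `kubectl log` alias, while current releases use `kubectl logs`):

```shell
# Placeholders: pod "redis-master-dm6zw", container "redis-master".
kubectl logs redis-master-dm6zw redis-master --tail=1              # only the last line
kubectl logs redis-master-dm6zw redis-master --limit-bytes=1       # only the first byte
kubectl logs redis-master-dm6zw redis-master --tail=1 --timestamps # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-dm6zw redis-master --since=1s            # nothing older than 1s (empty above)
kubectl logs redis-master-dm6zw redis-master --since=24h           # everything from the last day
```

These commands require a running cluster and an existing pod, so they are a usage sketch rather than something runnable in isolation.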
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 30 12:01:23.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27vw6'
Dec 30 12:01:23.169: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:01:23.170: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 30 12:01:23.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-27vw6'
Dec 30 12:01:23.260: INFO: stderr: "No resources found.\n"
Dec 30 12:01:23.260: INFO: stdout: ""
Dec 30 12:01:23.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-27vw6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 12:01:23.352: INFO: stderr: ""
Dec 30 12:01:23.352: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:01:23.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-27vw6" for this suite.
Dec 30 12:01:47.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:01:47.511: INFO: namespace: e2e-tests-kubectl-27vw6, resource: bindings, ignored listing per whitelist
Dec 30 12:01:47.630: INFO: namespace e2e-tests-kubectl-27vw6 deletion completed in 24.27114067s

• [SLOW TEST:40.146 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:01:47.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 30 12:01:47.860: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 30 12:01:47.868: INFO: Waiting for terminating namespaces to be deleted...
Dec 30 12:01:47.871: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 30 12:01:47.916: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 12:01:47.916: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 30 12:01:47.917: INFO: 	Container coredns ready: true, restart count 0
Dec 30 12:01:47.917: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 30 12:01:47.917: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 30 12:01:47.917: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 12:01:47.917: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 30 12:01:47.917: INFO: 	Container weave ready: true, restart count 0
Dec 30 12:01:47.917: INFO: 	Container weave-npc ready: true, restart count 0
Dec 30 12:01:47.917: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 30 12:01:47.917: INFO: 	Container coredns ready: true, restart count 0
Dec 30 12:01:47.917: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 30 12:01:47.917: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e5247005865556], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
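The `FailedScheduling` event above is produced by a pod whose `nodeSelector` matches no label on any node. A minimal illustrative manifest (the label key/value and image are hypothetical, not taken from the test source):

```yaml
# Hypothetical pod: no node carries the label below, so the scheduler
# emits "0/1 nodes are available: 1 node(s) didn't match node selector."
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    nonexistent-label: nonempty-value   # matches no node
```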
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:01:49.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-lqjbd" for this suite.
Dec 30 12:01:57.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:01:57.375: INFO: namespace: e2e-tests-sched-pred-lqjbd, resource: bindings, ignored listing per whitelist
Dec 30 12:01:57.406: INFO: namespace e2e-tests-sched-pred-lqjbd deletion completed in 8.385977037s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:9.776 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:01:57.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-2dc3f0a7-2afc-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:01:57.652: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-qrlv4" to be "success or failure"
Dec 30 12:01:57.670: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.154101ms
Dec 30 12:01:59.687: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035809086s
Dec 30 12:02:01.706: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054109939s
Dec 30 12:02:03.724: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072293208s
Dec 30 12:02:06.297: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.645410562s
Dec 30 12:02:08.320: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.668146543s
STEP: Saw pod success
Dec 30 12:02:08.320: INFO: Pod "pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:02:08.328: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 12:02:08.922: INFO: Waiting for pod pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:02:08.932: INFO: Pod pod-projected-configmaps-2dc4c97e-2afc-11ea-8970-0242ac110005 no longer exists
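The pod exercised above mounts a ConfigMap through a `projected` volume and reads a file back from it. A hedged sketch of such a pod; the image, args, key names, and mount path are illustrative assumptions, not copied from the test source:

```yaml
# Illustrative pod: a projected volume sourcing a ConfigMap; the
# container prints the mounted file, then exits (Succeeded phase).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox            # illustrative; the e2e suite uses its own test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # must exist beforehand
```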
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:02:08.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qrlv4" for this suite.
Dec 30 12:02:15.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:02:15.230: INFO: namespace: e2e-tests-projected-qrlv4, resource: bindings, ignored listing per whitelist
Dec 30 12:02:15.295: INFO: namespace e2e-tests-projected-qrlv4 deletion completed in 6.341737507s

• [SLOW TEST:17.888 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:02:15.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-387b4c29-2afc-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:02:15.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-g24l5" to be "success or failure"
Dec 30 12:02:15.820: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.354752ms
Dec 30 12:02:17.843: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048672492s
Dec 30 12:02:19.902: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107800498s
Dec 30 12:02:22.057: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262920804s
Dec 30 12:02:24.315: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.520948441s
Dec 30 12:02:26.356: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.561996095s
STEP: Saw pod success
Dec 30 12:02:26.357: INFO: Pod "pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:02:26.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 12:02:26.544: INFO: Waiting for pod pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:02:26.550: INFO: Pod pod-projected-secrets-38931fde-2afc-11ea-8970-0242ac110005 no longer exists
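The "with mappings" variant above differs from the plain secret-volume case in that `items` remaps a secret key to a chosen path inside the projected volume. An illustrative fragment (key and path names are hypothetical):

```yaml
# Illustrative projected-secret volume with a key-to-path mapping:
# the secret key "data-1" appears in the volume as "new-path-data-1".
volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: projected-secret-test-map   # must exist beforehand
        items:
        - key: data-1
          path: new-path-data-1
```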
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:02:26.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g24l5" for this suite.
Dec 30 12:02:32.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:02:32.704: INFO: namespace: e2e-tests-projected-g24l5, resource: bindings, ignored listing per whitelist
Dec 30 12:02:32.783: INFO: namespace e2e-tests-projected-g24l5 deletion completed in 6.225604661s

• [SLOW TEST:17.487 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:02:32.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 30 12:02:32.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 30 12:02:33.183: INFO: stderr: ""
Dec 30 12:02:33.183: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
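The assertion this test makes reduces to checking that the literal line `v1` appears in the `kubectl api-versions` output, which can be sketched as:

```shell
# -x matches the whole line, so "v1" is distinguished from e.g. "apps/v1".
kubectl api-versions | grep -qx v1 && echo "v1 available"
```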
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:02:33.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6qxpq" for this suite.
Dec 30 12:02:39.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:02:39.340: INFO: namespace: e2e-tests-kubectl-6qxpq, resource: bindings, ignored listing per whitelist
Dec 30 12:02:39.475: INFO: namespace e2e-tests-kubectl-6qxpq deletion completed in 6.278176516s

• [SLOW TEST:6.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:02:39.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-46d8e1db-2afc-11ea-8970-0242ac110005
STEP: Creating secret with name s-test-opt-upd-46d8e25f-2afc-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-46d8e1db-2afc-11ea-8970-0242ac110005
STEP: Updating secret s-test-opt-upd-46d8e25f-2afc-11ea-8970-0242ac110005
STEP: Creating secret with name s-test-opt-create-46d8e294-2afc-11ea-8970-0242ac110005
STEP: waiting to observe update in volume
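The deletion/update/creation steps above work because the projected secret sources are marked optional: removing a referenced secret does not fail the pod, and creating or updating one is eventually reflected in the mounted files. A hedged fragment illustrating the key field (names are hypothetical):

```yaml
# Illustrative optional secret source in a projected volume: the pod
# starts even if "s-test-opt-create" does not exist yet, and the file
# appears in the volume once the secret is created.
volumes:
- name: projected-secret-volumes
  projected:
    sources:
    - secret:
        name: s-test-opt-create
        optional: true
```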
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:04:08.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fgmdf" for this suite.
Dec 30 12:04:32.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:04:32.224: INFO: namespace: e2e-tests-projected-fgmdf, resource: bindings, ignored listing per whitelist
Dec 30 12:04:32.330: INFO: namespace e2e-tests-projected-fgmdf deletion completed in 24.269654409s

• [SLOW TEST:112.854 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:04:32.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:04:32.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-mnkr5" to be "success or failure"
Dec 30 12:04:32.904: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.160575ms
Dec 30 12:04:34.964: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101488892s
Dec 30 12:04:36.998: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134997262s
Dec 30 12:04:39.011: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148054s
Dec 30 12:04:41.219: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.355619638s
Dec 30 12:04:43.435: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.571881915s
STEP: Saw pod success
Dec 30 12:04:43.435: INFO: Pod "downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:04:43.455: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:04:43.580: INFO: Waiting for pod downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:04:43.592: INFO: Pod downwardapi-volume-8a3c4a45-2afc-11ea-8970-0242ac110005 no longer exists
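The pod above projects pod metadata into files via a downward API volume and checks the file mode set by `defaultMode`. An illustrative manifest (image, mode, and item paths are assumptions for the sketch, not the test's exact values):

```yaml
# Illustrative downward API volume with an explicit defaultMode; files
# projected into /etc/podinfo carry mode 0400 unless overridden per item.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox            # illustrative
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```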
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:04:43.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mnkr5" for this suite.
Dec 30 12:04:49.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:04:49.795: INFO: namespace: e2e-tests-downward-api-mnkr5, resource: bindings, ignored listing per whitelist
Dec 30 12:04:49.892: INFO: namespace e2e-tests-downward-api-mnkr5 deletion completed in 6.287322734s

• [SLOW TEST:17.562 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:04:49.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 30 12:05:00.732: INFO: Successfully updated pod "pod-update-9491ff7a-2afc-11ea-8970-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 30 12:05:00.740: INFO: Pod update OK
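The in-place update verified above touches one of the few mutable pod fields; labels are the usual example. A sketch of an equivalent manual update (the pod name and label key are hypothetical):

```shell
# Most pod spec fields are immutable after creation; metadata.labels
# can be changed in place, which is what this test verifies.
kubectl label pod pod-update-example time="$(date +%s)" --overwrite
kubectl get pod pod-update-example --show-labels
```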
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:05:00.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-78rsm" for this suite.
Dec 30 12:05:24.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:05:24.912: INFO: namespace: e2e-tests-pods-78rsm, resource: bindings, ignored listing per whitelist
Dec 30 12:05:24.921: INFO: namespace e2e-tests-pods-78rsm deletion completed in 24.174718325s

• [SLOW TEST:35.028 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:05:24.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 30 12:05:25.054: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 30 12:05:25.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:25.612: INFO: stderr: ""
Dec 30 12:05:25.612: INFO: stdout: "service/redis-slave created\n"
Dec 30 12:05:25.612: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 30 12:05:25.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:26.089: INFO: stderr: ""
Dec 30 12:05:26.089: INFO: stdout: "service/redis-master created\n"
Dec 30 12:05:26.090: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 30 12:05:26.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:26.678: INFO: stderr: ""
Dec 30 12:05:26.678: INFO: stdout: "service/frontend created\n"
Dec 30 12:05:26.680: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 30 12:05:26.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:27.101: INFO: stderr: ""
Dec 30 12:05:27.101: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 30 12:05:27.102: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 30 12:05:27.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:27.561: INFO: stderr: ""
Dec 30 12:05:27.561: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 30 12:05:27.562: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 30 12:05:27.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:28.010: INFO: stderr: ""
Dec 30 12:05:28.010: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 30 12:05:28.010: INFO: Waiting for all frontend pods to be Running.
Dec 30 12:05:53.062: INFO: Waiting for frontend to serve content.
Dec 30 12:05:53.143: INFO: Trying to add a new entry to the guestbook.
Dec 30 12:05:53.179: INFO: Verifying that added entry can be retrieved.
Dec 30 12:05:53.765: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 30 12:05:58.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:59.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:05:59.155: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 12:05:59.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:59.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:05:59.497: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 12:05:59.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:59.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:05:59.703: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 12:05:59.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:05:59.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:05:59.863: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 12:05:59.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:06:00.286: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:06:00.287: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 12:06:00.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p9cq4'
Dec 30 12:06:00.708: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 12:06:00.708: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:06:00.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p9cq4" for this suite.
Dec 30 12:06:44.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:06:45.085: INFO: namespace: e2e-tests-kubectl-p9cq4, resource: bindings, ignored listing per whitelist
Dec 30 12:06:45.154: INFO: namespace e2e-tests-kubectl-p9cq4 deletion completed in 44.40286642s

• [SLOW TEST:80.233 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
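Note on the manifests echoed above: the guestbook Deployments target the deprecated `extensions/v1beta1` API, which was removed for Deployment in Kubernetes v1.16. A sketch of the frontend Deployment migrated to `apps/v1` (where the `selector` field, optional under `extensions/v1beta1`, is required and must match the template labels) might look like:

```yaml
# Sketch only: apps/v1 requires an explicit selector;
# extensions/v1beta1 defaulted it from the pod template labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
```

The same `selector` change applies to the `redis-master` and `redis-slave` Deployments.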
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:06:45.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d94332d7-2afc-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:06:45.358: INFO: Waiting up to 5m0s for pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-q8rsf" to be "success or failure"
Dec 30 12:06:45.380: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.5139ms
Dec 30 12:06:47.447: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089296517s
Dec 30 12:06:49.465: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107302662s
Dec 30 12:06:51.981: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623077688s
Dec 30 12:06:54.017: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659282861s
Dec 30 12:06:56.032: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.674237388s
STEP: Saw pod success
Dec 30 12:06:56.032: INFO: Pod "pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:06:56.040: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:06:56.920: INFO: Waiting for pod pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:06:56.935: INFO: Pod pod-secrets-d943dcee-2afc-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:06:56.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-q8rsf" for this suite.
Dec 30 12:07:02.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:07:03.161: INFO: namespace: e2e-tests-secrets-q8rsf, resource: bindings, ignored listing per whitelist
Dec 30 12:07:03.215: INFO: namespace e2e-tests-secrets-q8rsf deletion completed in 6.260038178s

• [SLOW TEST:18.061 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
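The Secrets test above mounts a secret volume as a non-root user with both `defaultMode` and `fsGroup` set. A minimal pod sketch exercising the same combination (the pod, secret, and mount names here are illustrative, not the generated names from the log):

```yaml
# Illustrative sketch: non-root secret volume with defaultMode and fsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # run as non-root
    fsGroup: 1001                  # group ownership applied to volume files
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative name
      defaultMode: 0400                 # file mode for projected keys
```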
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:07:03.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 30 12:07:03.504: INFO: Waiting up to 5m0s for pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-5qdj2" to be "success or failure"
Dec 30 12:07:03.521: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.082184ms
Dec 30 12:07:05.542: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038356464s
Dec 30 12:07:07.641: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136962465s
Dec 30 12:07:09.875: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371117808s
Dec 30 12:07:11.893: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388991743s
Dec 30 12:07:14.258: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.753862065s
STEP: Saw pod success
Dec 30 12:07:14.258: INFO: Pod "pod-e40a92a9-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:07:14.272: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e40a92a9-2afc-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:07:14.642: INFO: Waiting for pod pod-e40a92a9-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:07:14.658: INFO: Pod pod-e40a92a9-2afc-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:07:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5qdj2" for this suite.
Dec 30 12:07:20.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:07:20.935: INFO: namespace: e2e-tests-emptydir-5qdj2, resource: bindings, ignored listing per whitelist
Dec 30 12:07:20.992: INFO: namespace e2e-tests-emptydir-5qdj2 deletion completed in 6.315637617s

• [SLOW TEST:17.776 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:07:20.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-eea1bd39-2afc-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:07:21.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-8r5x7" to be "success or failure"
Dec 30 12:07:21.275: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.712877ms
Dec 30 12:07:23.688: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468093293s
Dec 30 12:07:25.700: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48068517s
Dec 30 12:07:27.719: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499486846s
Dec 30 12:07:29.998: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.77859958s
Dec 30 12:07:32.267: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.047776108s
STEP: Saw pod success
Dec 30 12:07:32.267: INFO: Pod "pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:07:32.276: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 12:07:32.673: INFO: Waiting for pod pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:07:32.690: INFO: Pod pod-projected-configmaps-eea37b5a-2afc-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:07:32.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8r5x7" for this suite.
Dec 30 12:07:38.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:07:38.961: INFO: namespace: e2e-tests-projected-8r5x7, resource: bindings, ignored listing per whitelist
Dec 30 12:07:38.969: INFO: namespace e2e-tests-projected-8r5x7 deletion completed in 6.260021777s

• [SLOW TEST:17.976 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
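"Consumable with mappings" in the projected configMap test above means individual configMap keys are remapped to chosen file paths via `items`. A hedged sketch (key, path, and resource names are illustrative):

```yaml
# Illustrative sketch: projected configMap volume with key-to-path mappings.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative name
          items:
          - key: data-2              # configMap key
            path: path/to/data-2     # file path inside the mount
```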
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:07:38.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f949cf19-2afc-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:07:39.162: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-f6jcf" to be "success or failure"
Dec 30 12:07:39.173: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326209ms
Dec 30 12:07:41.185: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022435638s
Dec 30 12:07:43.201: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038773542s
Dec 30 12:07:45.230: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067086659s
Dec 30 12:07:47.258: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095204844s
Dec 30 12:07:49.275: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112731879s
STEP: Saw pod success
Dec 30 12:07:49.275: INFO: Pod "pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:07:49.286: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 12:07:49.367: INFO: Waiting for pod pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005 to disappear
Dec 30 12:07:49.379: INFO: Pod pod-projected-secrets-f94adde9-2afc-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:07:49.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f6jcf" for this suite.
Dec 30 12:07:55.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:07:55.598: INFO: namespace: e2e-tests-projected-f6jcf, resource: bindings, ignored listing per whitelist
Dec 30 12:07:55.720: INFO: namespace e2e-tests-projected-f6jcf deletion completed in 6.331907882s

• [SLOW TEST:16.751 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:07:55.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 30 12:08:08.687: INFO: Successfully updated pod "labelsupdate0359e338-2afd-11ea-8970-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:08:10.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jwtvn" for this suite.
Dec 30 12:08:34.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:08:35.025: INFO: namespace: e2e-tests-projected-jwtvn, resource: bindings, ignored listing per whitelist
Dec 30 12:08:35.033: INFO: namespace e2e-tests-projected-jwtvn deletion completed in 24.200248011s

• [SLOW TEST:39.313 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
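The "update labels on modification" test relies on the downward API re-projecting pod labels into the volume after they change. A minimal sketch of such a projected downwardAPI volume (pod name and file path are illustrative):

```yaml
# Illustrative sketch: projected downwardAPI volume exposing pod labels;
# the projected file is refreshed when the labels are updated.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example   # illustrative name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```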
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:08:35.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:08:35.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-6mphj" to be "success or failure"
Dec 30 12:08:35.482: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.501507ms
Dec 30 12:08:37.678: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296507786s
Dec 30 12:08:39.696: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315212762s
Dec 30 12:08:41.747: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366228695s
Dec 30 12:08:44.193: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812019688s
Dec 30 12:08:46.220: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.838532628s
STEP: Saw pod success
Dec 30 12:08:46.220: INFO: Pod "downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:08:46.229: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:08:46.557: INFO: Waiting for pod downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005 to disappear
Dec 30 12:08:46.579: INFO: Pod downwardapi-volume-1ad5def8-2afd-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:08:46.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6mphj" for this suite.
Dec 30 12:08:52.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:08:52.772: INFO: namespace: e2e-tests-projected-6mphj, resource: bindings, ignored listing per whitelist
Dec 30 12:08:52.934: INFO: namespace e2e-tests-projected-6mphj deletion completed in 6.336017208s

• [SLOW TEST:17.900 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:08:52.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 30 12:08:53.141: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix372130461/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:08:53.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mvhsr" for this suite.
Dec 30 12:08:59.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:08:59.409: INFO: namespace: e2e-tests-kubectl-mvhsr, resource: bindings, ignored listing per whitelist
Dec 30 12:08:59.536: INFO: namespace e2e-tests-kubectl-mvhsr deletion completed in 6.266348473s

• [SLOW TEST:6.602 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:08:59.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-29605e47-2afd-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-29605e47-2afd-11ea-8970-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:09:10.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v5vcp" for this suite.
Dec 30 12:09:34.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:09:34.226: INFO: namespace: e2e-tests-projected-v5vcp, resource: bindings, ignored listing per whitelist
Dec 30 12:09:34.259: INFO: namespace e2e-tests-projected-v5vcp deletion completed in 24.162508232s

• [SLOW TEST:34.722 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:09:34.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3e047021-2afd-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:09:34.493: INFO: Waiting up to 5m0s for pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-x8kv7" to be "success or failure"
Dec 30 12:09:34.508: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.638574ms
Dec 30 12:09:36.628: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134752213s
Dec 30 12:09:38.650: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157024473s
Dec 30 12:09:40.670: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177040077s
Dec 30 12:09:42.683: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190137199s
Dec 30 12:09:44.738: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.244863273s
STEP: Saw pod success
Dec 30 12:09:44.738: INFO: Pod "pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:09:44.745: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:09:45.829: INFO: Waiting for pod pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005 to disappear
Dec 30 12:09:45.845: INFO: Pod pod-secrets-3e0f405b-2afd-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:09:45.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x8kv7" for this suite.
Dec 30 12:09:51.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:09:51.945: INFO: namespace: e2e-tests-secrets-x8kv7, resource: bindings, ignored listing per whitelist
Dec 30 12:09:52.068: INFO: namespace e2e-tests-secrets-x8kv7 deletion completed in 6.207125581s

• [SLOW TEST:17.810 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:09:52.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tjvbx
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-tjvbx
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-tjvbx
Dec 30 12:09:52.278: INFO: Found 0 stateful pods, waiting for 1
Dec 30 12:10:02.300: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 30 12:10:02.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 12:10:03.099: INFO: stderr: ""
Dec 30 12:10:03.100: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 12:10:03.100: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 12:10:03.134: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 30 12:10:13.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 12:10:13.158: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 12:10:13.313: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:13.313: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:13.313: INFO: ss-1                              Pending         []
Dec 30 12:10:13.313: INFO: 
Dec 30 12:10:13.313: INFO: StatefulSet ss has not reached scale 3, at 2
Dec 30 12:10:14.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.893179424s
Dec 30 12:10:16.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.869818888s
Dec 30 12:10:17.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.143275695s
Dec 30 12:10:18.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.122998612s
Dec 30 12:10:19.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.08943466s
Dec 30 12:10:20.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.066720747s
Dec 30 12:10:22.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.238643312s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-tjvbx
Dec 30 12:10:23.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:10:24.081: INFO: stderr: ""
Dec 30 12:10:24.081: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 12:10:24.081: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 12:10:24.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:10:24.729: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 30 12:10:24.729: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 12:10:24.729: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 12:10:24.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:10:25.407: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 30 12:10:25.407: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 12:10:25.407: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 12:10:25.491: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 12:10:25.492: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 12:10:25.492: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 30 12:10:35.583: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 12:10:35.583: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 12:10:35.583: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 30 12:10:35.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 12:10:36.128: INFO: stderr: ""
Dec 30 12:10:36.128: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 12:10:36.128: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 12:10:36.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 12:10:36.812: INFO: stderr: ""
Dec 30 12:10:36.813: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 12:10:36.813: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 12:10:36.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 12:10:37.274: INFO: stderr: ""
Dec 30 12:10:37.274: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 12:10:37.274: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 12:10:37.274: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 12:10:37.291: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 30 12:10:47.322: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 12:10:47.322: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 12:10:47.322: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 12:10:47.474: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:47.474: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:47.474: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:47.474: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:47.474: INFO: 
Dec 30 12:10:47.475: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:48.843: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:48.843: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:48.843: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:48.843: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:48.843: INFO: 
Dec 30 12:10:48.843: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:50.084: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:50.084: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:50.084: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:50.084: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:50.084: INFO: 
Dec 30 12:10:50.084: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:51.099: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:51.099: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:51.099: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:51.099: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:51.099: INFO: 
Dec 30 12:10:51.099: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:52.313: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:52.314: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:52.314: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:52.314: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:52.314: INFO: 
Dec 30 12:10:52.314: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:53.333: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:53.333: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:53.333: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:53.333: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:53.333: INFO: 
Dec 30 12:10:53.333: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:54.963: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:54.963: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:54.964: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:54.964: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:54.964: INFO: 
Dec 30 12:10:54.964: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 12:10:55.998: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:55.998: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:55.998: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:13 +0000 UTC  }]
Dec 30 12:10:55.999: INFO: 
Dec 30 12:10:55.999: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 30 12:10:57.012: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 30 12:10:57.012: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:10:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:09:52 +0000 UTC  }]
Dec 30 12:10:57.012: INFO: 
Dec 30 12:10:57.012: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-tjvbx
Dec 30 12:10:58.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:10:58.271: INFO: rc: 1
Dec 30 12:10:58.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0015796b0 exit status 1   true [0xc00023a440 0xc00023a468 0xc00023a490] [0xc00023a440 0xc00023a468 0xc00023a490] [0xc00023a450 0xc00023a488] [0x935700 0x935700] 0xc0026d3e60 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 30 12:11:08.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:11:08.405: INFO: rc: 1
Dec 30 12:11:08.405: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010d2150 exit status 1   true [0xc00040cf08 0xc00040d080 0xc00040d100] [0xc00040cf08 0xc00040d080 0xc00040d100] [0xc00040cfa0 0xc00040d0d0] [0x935700 0x935700] 0xc0022aa5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 30 12:11:18.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:11:18.602: INFO: rc: 1
Dec 30 12:11:18.603: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000a39290 exit status 1   true [0xc00056d330 0xc00056d398 0xc00056d408] [0xc00056d330 0xc00056d398 0xc00056d408] [0xc00056d388 0xc00056d400] [0x935700 0x935700] 0xc0022126c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 30 12:11:28.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:11:28.717: INFO: rc: 1
Dec 30 12:11:28.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000a393e0 exit status 1   true [0xc00056d418 0xc00056d488 0xc00056d548] [0xc00056d418 0xc00056d488 0xc00056d548] [0xc00056d468 0xc00056d508] [0x935700 0x935700] 0xc002212960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 30 12:11:38.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:11:38.888: INFO: rc: 1
Dec 30 12:11:38.888: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000a39500 exit status 1   true [0xc00056d598 0xc00056d5f8 0xc00056d620] [0xc00056d598 0xc00056d5f8 0xc00056d620] [0xc00056d5e0 0xc00056d618] [0x935700 0x935700] 0xc002212c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 30 12:11:48.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:11:49.010: INFO: rc: 1
Dec 30 12:11:49.010: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000a39860 exit status 1   true [0xc00056d638 0xc00056d688 0xc00056d6b8] [0xc00056d638 0xc00056d688 0xc00056d6b8] [0xc00056d658 0xc00056d6b0] [0x935700 0x935700] 0xc002212ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 30 12:11:59.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:11:59.141: INFO: rc: 1
Dec 30 12:11:59.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001579800 exit status 1   true [0xc00023a498 0xc00023a4b0 0xc00023a4c8] [0xc00023a498 0xc00023a4b0 0xc00023a4c8] [0xc00023a4a8 0xc00023a4c0] [0x935700 0x935700] 0xc001ae4120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 30 12:12:09 – 12:15:52: INFO: [23 further identical retries, one every 10s; each ran the same kubectl exec command against pod "ss-0", returned rc: 1, and logged the same stderr: Error from server (NotFound): pods "ss-0" not found]
Dec 30 12:16:02.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 12:16:02.800: INFO: rc: 1
Dec 30 12:16:02.801: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
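The loop above is the framework's RunHostCmd retry pattern: re-issue the same `kubectl exec` every 10 seconds until it succeeds or a deadline passes. A minimal sketch of that pattern, assuming illustrative names (`retry_host_cmd` is not a real framework helper; the real deadline and interval are internal to the e2e framework):

```shell
retry_host_cmd() {
  # Retry "$@" every $interval seconds until it exits 0, or give up
  # once $deadline seconds have elapsed -- mirroring the 10s retry
  # loop visible in the log above.
  local deadline=$1 interval=$2
  shift 2
  local start
  start=$(date +%s)
  while ! "$@"; do
    if [ $(( $(date +%s) - start )) -ge "$deadline" ]; then
      echo "retry deadline of ${deadline}s exceeded" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Against a live cluster the logged call would look like:
#   retry_host_cmd 300 10 kubectl --kubeconfig=/root/.kube/config \
#     exec --namespace=e2e-tests-statefulset-tjvbx ss-0 -- \
#     /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Note that because the test's command ends in `|| true`, the shell inside the pod always exits 0; the rc: 1 seen here comes from `kubectl exec` itself failing (the pod no longer exists), which is why the retry never succeeds until the framework's deadline expires.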
Dec 30 12:16:02.801: INFO: Scaling statefulset ss to 0
Dec 30 12:16:03.063: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 30 12:16:03.156: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tjvbx
Dec 30 12:16:03.224: INFO: Scaling statefulset ss to 0
Dec 30 12:16:03.327: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 12:16:03.338: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:16:03.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-tjvbx" for this suite.
Dec 30 12:16:11.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:16:11.620: INFO: namespace: e2e-tests-statefulset-tjvbx, resource: bindings, ignored listing per whitelist
Dec 30 12:16:11.735: INFO: namespace e2e-tests-statefulset-tjvbx deletion completed in 8.204611287s

• [SLOW TEST:379.666 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:16:11.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-2b0cfe69-2afe-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2b0cfe69-2afe-11ea-8970-0242ac110005
STEP: waiting to observe update in volume
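The "waiting to observe update in volume" step relies on the kubelet periodically syncing mounted ConfigMaps, so an update becomes visible in the pod's filesystem without a restart. A minimal sketch of that poll, with the reader command injected so the logic runs anywhere; all resource names are illustrative, not from the framework:

```shell
wait_for_value() {
  # Poll the injected reader command until it prints the expected
  # (updated) value; this is the observable effect the test waits for.
  local reader=$1 expected=$2 interval=${3:-2}
  until [ "$($reader)" = "$expected" ]; do
    sleep "$interval"
  done
}

# Against the cluster the reader would be something like (pod name assumed):
#   kubectl -n e2e-tests-configmap-2wqb5 exec pod-configmaps-xxxxx -- \
#     cat /etc/configmap-volume/data-1
```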
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:17:36.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2wqb5" for this suite.
Dec 30 12:18:00.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:18:00.849: INFO: namespace: e2e-tests-configmap-2wqb5, resource: bindings, ignored listing per whitelist
Dec 30 12:18:00.952: INFO: namespace e2e-tests-configmap-2wqb5 deletion completed in 24.442196668s

• [SLOW TEST:109.217 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:18:00.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:18:01.163: INFO: Creating deployment "test-recreate-deployment"
Dec 30 12:18:01.174: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 30 12:18:01.209: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 30 12:18:03.610: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 30 12:18:03.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 12:18:05.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 12:18:07.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 12:18:09.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713305081, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 12:18:11.677: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 30 12:18:11.704: INFO: Updating deployment test-recreate-deployment
Dec 30 12:18:11.704: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
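What "new pods will not run with old pods" means concretely: with strategy Recreate, at no point may pods carrying the old pod-template-hash (5bf7f65dc in this run) coexist with pods carrying the new one (589c4bfd). A hedged sketch of that invariant check as a pure function (`no_overlap` is illustrative, not a framework helper); the hash list would come from something like `kubectl get pods -o jsonpath='{.items[*].metadata.labels.pod-template-hash}'`:

```shell
no_overlap() {
  # Return 1 (violation) if the space-separated hash list contains BOTH
  # the old and the new pod-template-hash, i.e. old and new pods were
  # observed running concurrently.
  local old=$1 new=$2 hashes=" $3 "
  case $hashes in
    *" $old "*)
      case $hashes in
        *" $new "*) return 1 ;;
      esac
      ;;
  esac
  return 0
}
```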
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 30 12:18:13.503: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-b8gjt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b8gjt/deployments/test-recreate-deployment,UID:6c16133f-2afe-11ea-a994-fa163e34d433,ResourceVersion:16570486,Generation:2,CreationTimestamp:2019-12-30 12:18:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-30 12:18:12 +0000 UTC 2019-12-30 12:18:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-30 12:18:12 +0000 UTC 2019-12-30 12:18:01 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 30 12:18:13.990: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-b8gjt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b8gjt/replicasets/test-recreate-deployment-589c4bfd,UID:72863b29-2afe-11ea-a994-fa163e34d433,ResourceVersion:16570484,Generation:1,CreationTimestamp:2019-12-30 12:18:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6c16133f-2afe-11ea-a994-fa163e34d433 0xc002574a6f 0xc002574a80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 12:18:13.990: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 30 12:18:13.991: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-b8gjt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b8gjt/replicasets/test-recreate-deployment-5bf7f65dc,UID:6c1c55a3-2afe-11ea-a994-fa163e34d433,ResourceVersion:16570473,Generation:2,CreationTimestamp:2019-12-30 12:18:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6c16133f-2afe-11ea-a994-fa163e34d433 0xc002574b40 0xc002574b41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 12:18:14.047: INFO: Pod "test-recreate-deployment-589c4bfd-2n2jb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2n2jb,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-b8gjt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8gjt/pods/test-recreate-deployment-589c4bfd-2n2jb,UID:729374ed-2afe-11ea-a994-fa163e34d433,ResourceVersion:16570485,Generation:0,CreationTimestamp:2019-12-30 12:18:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 72863b29-2afe-11ea-a994-fa163e34d433 0xc0022a8cbf 0xc0022a8cd0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9rhwr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9rhwr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9rhwr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a8d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a9520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:18:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:18:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:18:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:18:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-30 12:18:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:18:14.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-b8gjt" for this suite.
Dec 30 12:18:24.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:18:24.994: INFO: namespace: e2e-tests-deployment-b8gjt, resource: bindings, ignored listing per whitelist
Dec 30 12:18:25.049: INFO: namespace e2e-tests-deployment-b8gjt deletion completed in 10.737082058s

• [SLOW TEST:24.097 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:18:25.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 30 12:18:25.871: INFO: Number of nodes with available pods: 0
Dec 30 12:18:25.871: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:26.895: INFO: Number of nodes with available pods: 0
Dec 30 12:18:26.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:28.132: INFO: Number of nodes with available pods: 0
Dec 30 12:18:28.132: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:28.895: INFO: Number of nodes with available pods: 0
Dec 30 12:18:28.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:29.949: INFO: Number of nodes with available pods: 0
Dec 30 12:18:29.949: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:31.031: INFO: Number of nodes with available pods: 0
Dec 30 12:18:31.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:31.925: INFO: Number of nodes with available pods: 0
Dec 30 12:18:31.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:32.900: INFO: Number of nodes with available pods: 0
Dec 30 12:18:32.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:33.909: INFO: Number of nodes with available pods: 1
Dec 30 12:18:33.909: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 30 12:18:34.044: INFO: Number of nodes with available pods: 0
Dec 30 12:18:34.044: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:35.069: INFO: Number of nodes with available pods: 0
Dec 30 12:18:35.069: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:36.082: INFO: Number of nodes with available pods: 0
Dec 30 12:18:36.082: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:37.075: INFO: Number of nodes with available pods: 0
Dec 30 12:18:37.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:38.055: INFO: Number of nodes with available pods: 0
Dec 30 12:18:38.055: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:39.085: INFO: Number of nodes with available pods: 0
Dec 30 12:18:39.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:40.107: INFO: Number of nodes with available pods: 0
Dec 30 12:18:40.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:41.070: INFO: Number of nodes with available pods: 0
Dec 30 12:18:41.070: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:42.057: INFO: Number of nodes with available pods: 0
Dec 30 12:18:42.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:43.553: INFO: Number of nodes with available pods: 0
Dec 30 12:18:43.553: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:44.080: INFO: Number of nodes with available pods: 0
Dec 30 12:18:44.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:45.055: INFO: Number of nodes with available pods: 0
Dec 30 12:18:45.055: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:46.091: INFO: Number of nodes with available pods: 0
Dec 30 12:18:46.091: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:47.516: INFO: Number of nodes with available pods: 0
Dec 30 12:18:47.516: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:48.074: INFO: Number of nodes with available pods: 0
Dec 30 12:18:48.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:49.074: INFO: Number of nodes with available pods: 0
Dec 30 12:18:49.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:50.064: INFO: Number of nodes with available pods: 0
Dec 30 12:18:50.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:18:51.056: INFO: Number of nodes with available pods: 1
Dec 30 12:18:51.056: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-z4xfp, will wait for the garbage collector to delete the pods
Dec 30 12:18:51.126: INFO: Deleting DaemonSet.extensions daemon-set took: 12.524979ms
Dec 30 12:18:51.227: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.552295ms
Dec 30 12:19:02.787: INFO: Number of nodes with available pods: 0
Dec 30 12:19:02.787: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 12:19:02.791: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-z4xfp/daemonsets","resourceVersion":"16570608"},"items":null}

Dec 30 12:19:02.798: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-z4xfp/pods","resourceVersion":"16570608"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:19:02.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-z4xfp" for this suite.
Dec 30 12:19:08.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:19:08.963: INFO: namespace: e2e-tests-daemonsets-z4xfp, resource: bindings, ignored listing per whitelist
Dec 30 12:19:09.095: INFO: namespace e2e-tests-daemonsets-z4xfp deletion completed in 6.265624803s

• [SLOW TEST:44.046 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:19:09.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 30 12:19:09.271: INFO: Waiting up to 5m0s for pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-4m2jf" to be "success or failure"
Dec 30 12:19:09.297: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.914442ms
Dec 30 12:19:11.919: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.64813838s
Dec 30 12:19:13.970: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.698858098s
Dec 30 12:19:16.421: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.149946457s
Dec 30 12:19:18.470: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.198869731s
Dec 30 12:19:20.502: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.231137502s
STEP: Saw pod success
Dec 30 12:19:20.503: INFO: Pod "downward-api-94abe7b4-2afe-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:19:20.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-94abe7b4-2afe-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 12:19:20.886: INFO: Waiting for pod downward-api-94abe7b4-2afe-11ea-8970-0242ac110005 to disappear
Dec 30 12:19:20.909: INFO: Pod downward-api-94abe7b4-2afe-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:19:20.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4m2jf" for this suite.
Dec 30 12:19:26.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:19:27.046: INFO: namespace: e2e-tests-downward-api-4m2jf, resource: bindings, ignored listing per whitelist
Dec 30 12:19:27.116: INFO: namespace e2e-tests-downward-api-4m2jf deletion completed in 6.185346504s

• [SLOW TEST:18.020 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:19:27.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gnqzt
Dec 30 12:19:37.432: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gnqzt
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 12:19:37.439: INFO: Initial restart count of pod liveness-http is 0
Dec 30 12:19:59.791: INFO: Restart count of pod e2e-tests-container-probe-gnqzt/liveness-http is now 1 (22.35152437s elapsed)
Dec 30 12:20:20.041: INFO: Restart count of pod e2e-tests-container-probe-gnqzt/liveness-http is now 2 (42.602103396s elapsed)
Dec 30 12:20:40.312: INFO: Restart count of pod e2e-tests-container-probe-gnqzt/liveness-http is now 3 (1m2.873209813s elapsed)
Dec 30 12:21:00.582: INFO: Restart count of pod e2e-tests-container-probe-gnqzt/liveness-http is now 4 (1m23.142999187s elapsed)
Dec 30 12:22:13.685: INFO: Restart count of pod e2e-tests-container-probe-gnqzt/liveness-http is now 5 (2m36.245547401s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:22:13.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gnqzt" for this suite.
Dec 30 12:22:20.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:22:20.149: INFO: namespace: e2e-tests-container-probe-gnqzt, resource: bindings, ignored listing per whitelist
Dec 30 12:22:20.220: INFO: namespace e2e-tests-container-probe-gnqzt deletion completed in 6.356277568s

• [SLOW TEST:173.103 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:22:20.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:23:17.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-958l5" for this suite.
Dec 30 12:23:24.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:23:24.915: INFO: namespace: e2e-tests-container-runtime-958l5, resource: bindings, ignored listing per whitelist
Dec 30 12:23:25.112: INFO: namespace e2e-tests-container-runtime-958l5 deletion completed in 7.276109365s

• [SLOW TEST:64.892 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:23:25.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8kfdc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8kfdc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8kfdc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 249.240.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.240.249_udp@PTR;check="$$(dig +tcp +noall +answer +search 249.240.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.240.249_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8kfdc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8kfdc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8kfdc.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8kfdc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 249.240.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.240.249_udp@PTR;check="$$(dig +tcp +noall +answer +search 249.240.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.240.249_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 12:23:39.615: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.627: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.639: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-8kfdc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.650: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-8kfdc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.655: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.672: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.677: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.689: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.695: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.701: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.706: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.710: INFO: Unable to read 10.107.240.249_udp@PTR from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.724: INFO: Unable to read 10.107.240.249_tcp@PTR from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.732: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.741: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.747: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8kfdc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.752: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8kfdc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.757: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.764: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.775: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.781: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.787: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.793: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.799: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.804: INFO: Unable to read 10.107.240.249_udp@PTR from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.812: INFO: Unable to read 10.107.240.249_tcp@PTR from pod e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005: the server could not find the requested resource (get pods dns-test-2d55bedf-2aff-11ea-8970-0242ac110005)
Dec 30 12:23:39.812: INFO: Lookups using e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-8kfdc wheezy_tcp@dns-test-service.e2e-tests-dns-8kfdc wheezy_udp@dns-test-service.e2e-tests-dns-8kfdc.svc wheezy_tcp@dns-test-service.e2e-tests-dns-8kfdc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.107.240.249_udp@PTR 10.107.240.249_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8kfdc jessie_tcp@dns-test-service.e2e-tests-dns-8kfdc jessie_udp@dns-test-service.e2e-tests-dns-8kfdc.svc jessie_tcp@dns-test-service.e2e-tests-dns-8kfdc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8kfdc.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-8kfdc.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.107.240.249_udp@PTR 10.107.240.249_tcp@PTR]

Dec 30 12:23:44.941: INFO: DNS probes using e2e-tests-dns-8kfdc/dns-test-2d55bedf-2aff-11ea-8970-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:23:45.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-8kfdc" for this suite.
Dec 30 12:23:53.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:23:53.476: INFO: namespace: e2e-tests-dns-8kfdc, resource: bindings, ignored listing per whitelist
Dec 30 12:23:53.580: INFO: namespace e2e-tests-dns-8kfdc deletion completed in 8.293661085s

• [SLOW TEST:28.468 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
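Editor's note on the DNS test above: the probe pod runs a shell loop (its `dig` commands are visible at the top of this section) that writes an `OK` marker file per successful lookup under `/results`; the `Unable to read …` lines are the test polling for those marker files until all appear. A minimal Python sketch of that marker-file check — the function name and paths here are illustrative, not the e2e framework's actual Go code:

```python
import os
import tempfile

def missing_lookups(results_dir, expected_names):
    """Return the expected lookup names whose OK marker file is absent."""
    return [name for name in expected_names
            if not os.path.isfile(os.path.join(results_dir, name))]

# Simulate a probe pod that has completed only the UDP lookup so far.
d = tempfile.mkdtemp()
with open(os.path.join(d, "jessie_udp@PodARecord"), "w") as f:
    f.write("OK")

expected = ["jessie_udp@PodARecord", "jessie_tcp@PodARecord"]
print(missing_lookups(d, expected))  # ['jessie_tcp@PodARecord']
```

The test retries this check (here, every ~5s) and only reports success once the missing list is empty, which is why the 12:23:39 failures are followed by the 12:23:44 "DNS probes … succeeded" line.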
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:23:53.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-3e582967-2aff-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:23:54.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-s6dfh" to be "success or failure"
Dec 30 12:23:54.169: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.826087ms
Dec 30 12:23:56.182: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045713821s
Dec 30 12:23:58.219: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082808053s
Dec 30 12:24:00.301: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165605553s
Dec 30 12:24:02.321: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185261162s
Dec 30 12:24:04.604: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468347008s
STEP: Saw pod success
Dec 30 12:24:04.604: INFO: Pod "pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:24:04.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 12:24:04.905: INFO: Waiting for pod pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:24:04.974: INFO: Pod pod-projected-configmaps-3e6fed93-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:24:04.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s6dfh" for this suite.
Dec 30 12:24:11.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:24:11.215: INFO: namespace: e2e-tests-projected-s6dfh, resource: bindings, ignored listing per whitelist
Dec 30 12:24:11.240: INFO: namespace e2e-tests-projected-s6dfh deletion completed in 6.245811591s

• [SLOW TEST:17.659 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
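Editor's note: the repeated `Phase="Pending" … Elapsed: Ns` lines above show the framework polling the pod phase roughly every 2 seconds until it reaches `Succeeded` or `Failed`, or the 5m0s timeout expires. The real framework does this with Go's `wait` helpers; the following Python poll loop is an illustrative sketch of the same pattern, with hypothetical names:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0):
    """Poll check() until it returns True or timeout elapses; return success flag."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulate a pod that stays Pending for three polls, then Succeeds.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
current = {"phase": "Pending"}

def pod_done():
    current["phase"] = next(phases, current["phase"])
    return current["phase"] in ("Succeeded", "Failed")

print(wait_for_condition(pod_done, timeout=10.0, interval=0.0))  # True
```

Each logged line in the test corresponds to one `check()` call; "Saw pod success" is the loop returning true before the deadline.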
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:24:11.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:24:11.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-mvrp7" to be "success or failure"
Dec 30 12:24:11.443: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.197604ms
Dec 30 12:24:13.782: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35694404s
Dec 30 12:24:15.816: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390345055s
Dec 30 12:24:18.297: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.87190161s
Dec 30 12:24:20.314: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.888457297s
Dec 30 12:24:22.333: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907699377s
STEP: Saw pod success
Dec 30 12:24:22.333: INFO: Pod "downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:24:22.337: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:24:23.319: INFO: Waiting for pod downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:24:23.337: INFO: Pod downwardapi-volume-48c52a24-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:24:23.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mvrp7" for this suite.
Dec 30 12:24:29.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:24:29.481: INFO: namespace: e2e-tests-projected-mvrp7, resource: bindings, ignored listing per whitelist
Dec 30 12:24:29.623: INFO: namespace e2e-tests-projected-mvrp7 deletion completed in 6.270369338s

• [SLOW TEST:18.382 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
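Editor's note: the downward API test above verifies the fallback rule that when a container sets no memory limit, the downward API volume reports the node's allocatable memory instead. A one-line model of that rule (hypothetical function name, illustrative values):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward API falls back to node allocatable when no limit is set."""
    return container_limit if container_limit is not None else node_allocatable

print(effective_memory_limit(None, 4026531840))       # no limit set -> node allocatable
print(effective_memory_limit(536870912, 4026531840))  # explicit limit wins
```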
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:24:29.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 30 12:24:29.942: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4n7nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4n7nd/configmaps/e2e-watch-test-watch-closed,UID:53cd7d42-2aff-11ea-a994-fa163e34d433,ResourceVersion:16571256,Generation:0,CreationTimestamp:2019-12-30 12:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 12:24:29.942: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4n7nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4n7nd/configmaps/e2e-watch-test-watch-closed,UID:53cd7d42-2aff-11ea-a994-fa163e34d433,ResourceVersion:16571257,Generation:0,CreationTimestamp:2019-12-30 12:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 30 12:24:29.975: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4n7nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4n7nd/configmaps/e2e-watch-test-watch-closed,UID:53cd7d42-2aff-11ea-a994-fa163e34d433,ResourceVersion:16571258,Generation:0,CreationTimestamp:2019-12-30 12:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 12:24:29.976: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4n7nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4n7nd/configmaps/e2e-watch-test-watch-closed,UID:53cd7d42-2aff-11ea-a994-fa163e34d433,ResourceVersion:16571259,Generation:0,CreationTimestamp:2019-12-30 12:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:24:29.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4n7nd" for this suite.
Dec 30 12:24:36.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:24:36.124: INFO: namespace: e2e-tests-watch-4n7nd, resource: bindings, ignored listing per whitelist
Dec 30 12:24:36.226: INFO: namespace e2e-tests-watch-4n7nd deletion completed in 6.238284957s

• [SLOW TEST:6.603 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
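Editor's note: the watch test above closes its first watch after two events (ADDED, MODIFIED at ResourceVersion 16571257), mutates and deletes the configmap while no watch is open, then opens a new watch from that last observed resource version and expects to receive exactly the later MODIFIED (mutation: 2) and DELETED events. A toy sketch of resuming from a resource version over a recorded event stream — this models the semantics only, it is not client-go:

```python
def resume_watch(events, last_rv):
    """Replay only events with resourceVersion strictly greater than last_rv."""
    return [(kind, rv) for kind, rv in events if rv > last_rv]

history = [
    ("ADDED",    16571256),
    ("MODIFIED", 16571257),  # last event seen before the watch closed
    ("MODIFIED", 16571258),  # happened while the watch was closed
    ("DELETED",  16571259),
]
print(resume_watch(history, 16571257))
# [('MODIFIED', 16571258), ('DELETED', 16571259)]
```

This matches the two `Got : MODIFIED` / `Got : DELETED` lines logged at 12:24:29.975.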
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:24:36.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:24:36.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-5kqxk" to be "success or failure"
Dec 30 12:24:36.518: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.859515ms
Dec 30 12:24:38.557: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063343265s
Dec 30 12:24:40.604: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109951248s
Dec 30 12:24:42.620: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126227449s
Dec 30 12:24:45.256: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762059151s
Dec 30 12:24:47.445: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.951193225s
Dec 30 12:24:49.559: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.065553431s
STEP: Saw pod success
Dec 30 12:24:49.559: INFO: Pod "downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:24:49.568: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:24:50.076: INFO: Waiting for pod downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:24:50.115: INFO: Pod downwardapi-volume-57b3a9e8-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:24:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5kqxk" for this suite.
Dec 30 12:24:56.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:24:56.253: INFO: namespace: e2e-tests-downward-api-5kqxk, resource: bindings, ignored listing per whitelist
Dec 30 12:24:56.574: INFO: namespace e2e-tests-downward-api-5kqxk deletion completed in 6.436617731s

• [SLOW TEST:20.348 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:24:56.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:25:56.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-cwxnd" for this suite.
Dec 30 12:26:21.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:26:21.314: INFO: namespace: e2e-tests-container-probe-cwxnd, resource: bindings, ignored listing per whitelist
Dec 30 12:26:21.345: INFO: namespace e2e-tests-container-probe-cwxnd deletion completed in 24.390019213s

• [SLOW TEST:84.771 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
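Editor's note: the probe test above relies on the rule that a failing readiness probe marks the container NotReady but, unlike a failing liveness probe, never triggers a restart; the test simply waits about a minute and asserts `ready=false` and `restartCount=0`. A tiny model of that kubelet behavior (hypothetical function, not kubelet code):

```python
def run_probes(liveness_ok, readiness_ok, periods):
    """Toy model of probe handling: only liveness failures restart the container."""
    state = {"ready": False, "restarts": 0}
    for _ in range(periods):
        if not liveness_ok:
            state["restarts"] += 1  # liveness failure -> container restarted
        state["ready"] = readiness_ok  # readiness only gates traffic, never restarts
    return state

print(run_probes(liveness_ok=True, readiness_ok=False, periods=30))
# {'ready': False, 'restarts': 0}
```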
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:26:21.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1230 12:26:52.272964       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 12:26:52.273: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:26:52.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xtjtt" for this suite.
Dec 30 12:27:00.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:27:00.996: INFO: namespace: e2e-tests-gc-xtjtt, resource: bindings, ignored listing per whitelist
Dec 30 12:27:00.999: INFO: namespace e2e-tests-gc-xtjtt deletion completed in 8.667531417s

• [SLOW TEST:39.653 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
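Editor's note: with `deleteOptions.PropagationPolicy: Orphan`, deleting the Deployment must leave its ReplicaSet behind — the garbage collector strips the ownerReference rather than cascading the delete, which is why the test waits 30 seconds to confirm the RS survives. A toy model of the orphan-vs-cascade decision over a map of object UIDs to their owner UIDs (illustrative only):

```python
def delete_owner(objects, owner_uid, policy):
    """Model cascade vs orphan deletion; objects maps uid -> set of owner uids."""
    remaining = {}
    for uid, owners in objects.items():
        if uid == owner_uid:
            continue  # the owner itself is always deleted
        if owner_uid in owners:
            if policy == "Orphan":
                remaining[uid] = owners - {owner_uid}  # keep object, drop ownerReference
            # under Foreground/Background the dependent is deleted: skip it
        else:
            remaining[uid] = owners
    return remaining

objs = {"deploy-1": set(), "rs-1": {"deploy-1"}}
print(delete_owner(objs, "deploy-1", "Orphan"))      # {'rs-1': set()}
print(delete_owner(objs, "deploy-1", "Background"))  # {}
```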
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:27:00.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-adff91a5-2aff-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:27:01.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-vkwrm" to be "success or failure"
Dec 30 12:27:01.408: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.341085ms
Dec 30 12:27:03.471: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142302605s
Dec 30 12:27:05.487: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157998759s
Dec 30 12:27:07.852: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.522638576s
Dec 30 12:27:09.868: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538899219s
Dec 30 12:27:11.899: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569959892s
STEP: Saw pod success
Dec 30 12:27:11.899: INFO: Pod "pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:27:11.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 12:27:12.647: INFO: Waiting for pod pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:27:12.682: INFO: Pod pod-configmaps-ae0231a4-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:27:12.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vkwrm" for this suite.
Dec 30 12:27:18.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:27:18.993: INFO: namespace: e2e-tests-configmap-vkwrm, resource: bindings, ignored listing per whitelist
Dec 30 12:27:19.109: INFO: namespace e2e-tests-configmap-vkwrm deletion completed in 6.319682084s

• [SLOW TEST:18.109 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
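The ConfigMap test above mounts the same ConfigMap into two volumes of one pod and reads a key from both paths. A minimal hand-built equivalent, assuming illustrative names, could be:

```yaml
# Sketch only; the e2e framework generates its own names and test image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-1/data-1 /etc/configmap-2/data-1"]
    volumeMounts:
    - {name: cfg-1, mountPath: /etc/configmap-1}
    - {name: cfg-2, mountPath: /etc/configmap-2}
  volumes:
  - name: cfg-1
    configMap: {name: demo-config}   # same ConfigMap...
  - name: cfg-2
    configMap: {name: demo-config}   # ...mounted a second time
```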
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:27:19.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b8bba149-2aff-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:27:19.289: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-p2lfh" to be "success or failure"
Dec 30 12:27:19.386: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.823003ms
Dec 30 12:27:21.522: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233116015s
Dec 30 12:27:23.540: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251294596s
Dec 30 12:27:25.553: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264514316s
Dec 30 12:27:27.879: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589651675s
Dec 30 12:27:29.915: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.626305092s
STEP: Saw pod success
Dec 30 12:27:29.915: INFO: Pod "pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:27:29.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 12:27:30.738: INFO: Waiting for pod pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:27:31.091: INFO: Pod pod-projected-secrets-b8bcff0d-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:27:31.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p2lfh" for this suite.
Dec 30 12:27:37.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:27:37.370: INFO: namespace: e2e-tests-projected-p2lfh, resource: bindings, ignored listing per whitelist
Dec 30 12:27:37.457: INFO: namespace e2e-tests-projected-p2lfh deletion completed in 6.35335634s

• [SLOW TEST:18.348 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
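The projected-secret test above consumes a Secret through a `projected` volume rather than a plain `secret` volume. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret/data-1"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected-secret
  volumes:
  - name: projected
    projected:
      sources:
      - secret: {name: demo-secret}   # secret delivered via the projected volume type
```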
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:27:37.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c3c0e4af-2aff-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:27:37.820: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-64kd9" to be "success or failure"
Dec 30 12:27:37.872: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.070177ms
Dec 30 12:27:39.936: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115740168s
Dec 30 12:27:41.953: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132164587s
Dec 30 12:27:44.096: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275200859s
Dec 30 12:27:46.243: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.422637708s
Dec 30 12:27:48.381: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.560886058s
STEP: Saw pod success
Dec 30 12:27:48.381: INFO: Pod "pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:27:48.393: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 12:27:49.206: INFO: Waiting for pod pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:27:49.232: INFO: Pod pod-configmaps-c3c8c328-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:27:49.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-64kd9" for this suite.
Dec 30 12:27:55.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:27:55.358: INFO: namespace: e2e-tests-configmap-64kd9, resource: bindings, ignored listing per whitelist
Dec 30 12:27:55.428: INFO: namespace e2e-tests-configmap-64kd9 deletion completed in 6.183157295s

• [SLOW TEST:17.971 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
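The single-volume ConfigMap test above is the simpler variant: one ConfigMap, one mount. A hand-built sketch (names are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap
  volumes:
  - name: cfg
    configMap: {name: demo-config}
```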
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:27:55.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 30 12:27:55.623: INFO: Waiting up to 5m0s for pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-w2j8l" to be "success or failure"
Dec 30 12:27:55.655: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.494818ms
Dec 30 12:27:57.951: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328297522s
Dec 30 12:27:59.975: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351539536s
Dec 30 12:28:02.044: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421262682s
Dec 30 12:28:04.074: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450571117s
Dec 30 12:28:06.096: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.47301088s
STEP: Saw pod success
Dec 30 12:28:06.096: INFO: Pod "pod-ce5f333a-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:28:06.103: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ce5f333a-2aff-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:28:06.296: INFO: Waiting for pod pod-ce5f333a-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:28:07.215: INFO: Pod pod-ce5f333a-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:28:07.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w2j8l" for this suite.
Dec 30 12:28:15.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:28:15.554: INFO: namespace: e2e-tests-emptydir-w2j8l, resource: bindings, ignored listing per whitelist
Dec 30 12:28:15.642: INFO: namespace e2e-tests-emptydir-w2j8l deletion completed in 8.407506032s

• [SLOW TEST:20.214 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
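The EmptyDir `(non-root,0644,default)` test above writes a 0644 file into a default-medium emptyDir as a non-root user. A rough manual equivalent, under the assumption that the variant names map to `runAsUser`, file mode, and volume medium:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}           # "default" medium: backed by node storage
```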
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:28:15.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-da6ef7d1-2aff-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:28:15.806: INFO: Waiting up to 5m0s for pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-cr2jr" to be "success or failure"
Dec 30 12:28:15.848: INFO: Pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.156898ms
Dec 30 12:28:18.528: INFO: Pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.721627327s
Dec 30 12:28:20.555: INFO: Pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.7480547s
Dec 30 12:28:22.603: INFO: Pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.796179592s
Dec 30 12:28:24.661: INFO: Pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.854615563s
STEP: Saw pod success
Dec 30 12:28:24.661: INFO: Pod "pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:28:24.678: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005 container secret-env-test: 
STEP: delete the pod
Dec 30 12:28:24.985: INFO: Waiting for pod pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:28:24.994: INFO: Pod pod-secrets-da6f6733-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:28:24.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cr2jr" for this suite.
Dec 30 12:28:31.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:28:31.294: INFO: namespace: e2e-tests-secrets-cr2jr, resource: bindings, ignored listing per whitelist
Dec 30 12:28:31.316: INFO: namespace e2e-tests-secrets-cr2jr deletion completed in 6.313549063s

• [SLOW TEST:15.673 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
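The Secrets test above injects a Secret key as an environment variable rather than a volume. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: env-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-secret
          key: data-1
```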
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:28:31.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:28:31.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-fknws" to be "success or failure"
Dec 30 12:28:31.864: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 194.038434ms
Dec 30 12:28:34.057: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3868784s
Dec 30 12:28:36.096: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425782601s
Dec 30 12:28:38.106: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436395088s
Dec 30 12:28:40.120: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.44997836s
Dec 30 12:28:42.136: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.465774181s
STEP: Saw pod success
Dec 30 12:28:42.136: INFO: Pod "downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:28:42.139: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:28:42.272: INFO: Waiting for pod downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:28:42.283: INFO: Pod downwardapi-volume-e3dbedeb-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:28:42.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fknws" for this suite.
Dec 30 12:28:48.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:28:48.494: INFO: namespace: e2e-tests-downward-api-fknws, resource: bindings, ignored listing per whitelist
Dec 30 12:28:48.557: INFO: namespace e2e-tests-downward-api-fknws deletion completed in 6.267929843s

• [SLOW TEST:17.240 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
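The Downward API test above exposes the pod's own name through a downwardAPI volume. A minimal sketch (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef: {fieldPath: metadata.name}   # file contents = pod name
```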
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:28:48.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:29:01.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mngzd" for this suite.
Dec 30 12:29:07.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:29:07.976: INFO: namespace: e2e-tests-kubelet-test-mngzd, resource: bindings, ignored listing per whitelist
Dec 30 12:29:08.137: INFO: namespace e2e-tests-kubelet-test-mngzd deletion completed in 6.98004288s

• [SLOW TEST:19.579 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:29:08.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f9c8b5ba-2aff-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:29:08.817: INFO: Waiting up to 5m0s for pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-hthdh" to be "success or failure"
Dec 30 12:29:08.881: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.164779ms
Dec 30 12:29:11.220: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.403120809s
Dec 30 12:29:13.229: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412092138s
Dec 30 12:29:15.264: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446945888s
Dec 30 12:29:17.285: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4676763s
Dec 30 12:29:19.305: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.487736681s
STEP: Saw pod success
Dec 30 12:29:19.305: INFO: Pod "pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:29:19.339: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:29:20.395: INFO: Waiting for pod pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005 to disappear
Dec 30 12:29:20.708: INFO: Pod pod-secrets-fa05c36f-2aff-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:29:20.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hthdh" for this suite.
Dec 30 12:29:26.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:29:27.167: INFO: namespace: e2e-tests-secrets-hthdh, resource: bindings, ignored listing per whitelist
Dec 30 12:29:27.204: INFO: namespace e2e-tests-secrets-hthdh deletion completed in 6.410465464s
STEP: Destroying namespace "e2e-tests-secret-namespace-h9d48" for this suite.
Dec 30 12:29:33.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:29:33.360: INFO: namespace: e2e-tests-secret-namespace-h9d48, resource: bindings, ignored listing per whitelist
Dec 30 12:29:33.427: INFO: namespace e2e-tests-secret-namespace-h9d48 deletion completed in 6.223189696s

• [SLOW TEST:25.289 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
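The Secrets test above mounts a Secret while a different Secret with the same name exists in another namespace, verifying the mount resolves within the pod's own namespace. A sketch with hypothetical namespaces `ns-a` and `ns-b`:

```yaml
apiVersion: v1
kind: Secret
metadata: {name: shared-name, namespace: ns-a}
stringData: {data-1: from-ns-a}
---
apiVersion: v1
kind: Secret
metadata: {name: shared-name, namespace: ns-b}
stringData: {data-1: from-ns-b}
---
apiVersion: v1
kind: Pod
metadata: {name: secret-volume-demo, namespace: ns-a}
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret/data-1"]   # should see the ns-a value
    volumeMounts:
    - {name: secret-vol, mountPath: /etc/secret}
  volumes:
  - name: secret-vol
    secret: {secretName: shared-name}        # resolved in the pod's namespace
```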
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:29:33.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 30 12:29:33.800: INFO: Waiting up to 5m0s for pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-8b8sg" to be "success or failure"
Dec 30 12:29:34.009: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 209.075167ms
Dec 30 12:29:36.180: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379582578s
Dec 30 12:29:38.200: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399961542s
Dec 30 12:29:40.712: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912243845s
Dec 30 12:29:42.733: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.932491054s
Dec 30 12:29:44.833: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.032634541s
STEP: Saw pod success
Dec 30 12:29:44.833: INFO: Pod "pod-08e9d5d3-2b00-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:29:44.869: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-08e9d5d3-2b00-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:29:45.027: INFO: Waiting for pod pod-08e9d5d3-2b00-11ea-8970-0242ac110005 to disappear
Dec 30 12:29:45.043: INFO: Pod pod-08e9d5d3-2b00-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:29:45.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8b8sg" for this suite.
Dec 30 12:29:51.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:29:51.537: INFO: namespace: e2e-tests-emptydir-8b8sg, resource: bindings, ignored listing per whitelist
Dec 30 12:29:51.537: INFO: namespace e2e-tests-emptydir-8b8sg deletion completed in 6.475821556s

• [SLOW TEST:18.110 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
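The EmptyDir `(non-root,0666,tmpfs)` variant above differs from the earlier default-medium case only in file mode and in using a memory-backed volume. A rough manual equivalent:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext: {runAsUser: 1001}   # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - {name: scratch, mountPath: /mnt/test}
  volumes:
  - name: scratch
    emptyDir: {medium: Memory}         # tmpfs-backed emptyDir
```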
SSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:29:51.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:29:51.911: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 30 12:29:56.936: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 30 12:30:02.960: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 30 12:30:03.021: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-chr22,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-chr22/deployments/test-cleanup-deployment,UID:1a515e4e-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572020,Generation:1,CreationTimestamp:2019-12-30 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 30 12:30:03.177: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:30:03.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-chr22" for this suite.
Dec 30 12:30:17.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:30:17.637: INFO: namespace: e2e-tests-deployment-chr22, resource: bindings, ignored listing per whitelist
Dec 30 12:30:17.687: INFO: namespace e2e-tests-deployment-chr22 deletion completed in 14.435919461s

• [SLOW TEST:26.150 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
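The cleanup test above creates a Deployment with `RevisionHistoryLimit:*0`, so every old ReplicaSet is pruned as soon as it is fully scaled down. A minimal sketch of that pruning rule (a simplified model of the controller's behaviour, not its actual code; the tuple layout here is a hypothetical convenience):

```python
def replica_sets_to_clean_up(old_replica_sets, revision_history_limit):
    """Return the old ReplicaSets the Deployment controller would delete.

    old_replica_sets: (name, revision, replicas) tuples for ReplicaSets no
    longer matched by the current pod template. Only fully scaled-down sets
    (replicas == 0) are eligible for deletion, and the
    `revision_history_limit` newest eligible ones are kept as history.
    """
    eligible = sorted(
        (rs for rs in old_replica_sets if rs[2] == 0),
        key=lambda rs: rs[1],  # oldest revision first
    )
    return eligible[: max(len(eligible) - revision_history_limit, 0)]
```

With a limit of 0, as in the test, every scaled-down ReplicaSet is deleted immediately, which is why the log only needs to wait for the history to be cleaned up.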
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:30:17.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:30:17.924: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 30 12:30:17.947: INFO: Number of nodes with available pods: 0
Dec 30 12:30:17.947: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 30 12:30:18.208: INFO: Number of nodes with available pods: 0
Dec 30 12:30:18.208: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:19.227: INFO: Number of nodes with available pods: 0
Dec 30 12:30:19.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:20.252: INFO: Number of nodes with available pods: 0
Dec 30 12:30:20.253: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:21.254: INFO: Number of nodes with available pods: 0
Dec 30 12:30:21.254: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:22.236: INFO: Number of nodes with available pods: 0
Dec 30 12:30:22.236: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:23.644: INFO: Number of nodes with available pods: 0
Dec 30 12:30:23.644: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:24.251: INFO: Number of nodes with available pods: 0
Dec 30 12:30:24.252: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:25.233: INFO: Number of nodes with available pods: 0
Dec 30 12:30:25.233: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:26.226: INFO: Number of nodes with available pods: 0
Dec 30 12:30:26.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:27.221: INFO: Number of nodes with available pods: 1
Dec 30 12:30:27.221: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 30 12:30:27.282: INFO: Number of nodes with available pods: 1
Dec 30 12:30:27.282: INFO: Number of running nodes: 0, number of available pods: 1
Dec 30 12:30:28.299: INFO: Number of nodes with available pods: 0
Dec 30 12:30:28.299: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 30 12:30:28.337: INFO: Number of nodes with available pods: 0
Dec 30 12:30:28.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:29.353: INFO: Number of nodes with available pods: 0
Dec 30 12:30:29.353: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:30.354: INFO: Number of nodes with available pods: 0
Dec 30 12:30:30.355: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:31.349: INFO: Number of nodes with available pods: 0
Dec 30 12:30:31.350: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:32.378: INFO: Number of nodes with available pods: 0
Dec 30 12:30:32.378: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:33.352: INFO: Number of nodes with available pods: 0
Dec 30 12:30:33.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:34.359: INFO: Number of nodes with available pods: 0
Dec 30 12:30:34.359: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:35.352: INFO: Number of nodes with available pods: 0
Dec 30 12:30:35.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:36.347: INFO: Number of nodes with available pods: 0
Dec 30 12:30:36.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:37.628: INFO: Number of nodes with available pods: 0
Dec 30 12:30:37.629: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:38.357: INFO: Number of nodes with available pods: 0
Dec 30 12:30:38.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:39.355: INFO: Number of nodes with available pods: 0
Dec 30 12:30:39.355: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:40.957: INFO: Number of nodes with available pods: 0
Dec 30 12:30:40.957: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:41.356: INFO: Number of nodes with available pods: 0
Dec 30 12:30:41.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:42.354: INFO: Number of nodes with available pods: 0
Dec 30 12:30:42.354: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:43.345: INFO: Number of nodes with available pods: 0
Dec 30 12:30:43.346: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 30 12:30:44.354: INFO: Number of nodes with available pods: 1
Dec 30 12:30:44.354: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vr9kp, will wait for the garbage collector to delete the pods
Dec 30 12:30:44.449: INFO: Deleting DaemonSet.extensions daemon-set took: 28.19673ms
Dec 30 12:30:44.550: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.407595ms
Dec 30 12:31:02.906: INFO: Number of nodes with available pods: 0
Dec 30 12:31:02.906: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 12:31:02.912: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vr9kp/daemonsets","resourceVersion":"16572172"},"items":null}

Dec 30 12:31:02.918: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vr9kp/pods","resourceVersion":"16572172"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:31:02.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vr9kp" for this suite.
Dec 30 12:31:08.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:31:09.041: INFO: namespace: e2e-tests-daemonsets-vr9kp, resource: bindings, ignored listing per whitelist
Dec 30 12:31:09.125: INFO: namespace e2e-tests-daemonsets-vr9kp deletion completed in 6.158772669s

• [SLOW TEST:51.437 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
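The DaemonSet test above drives scheduling entirely through node labels: the daemon pod appears when the node is labelled blue, disappears when the label flips to green, and returns once the DaemonSet's own selector is updated to green. The matching rule it exercises can be sketched as:

```python
def daemon_pod_should_run(node_labels, node_selector):
    """True when every key/value pair in the DaemonSet's nodeSelector is
    present on the node; an empty selector matches every node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Mirroring the test: the selector only matches while the label agrees.
selector = {"color": "blue"}
blue_node = {"color": "blue", "kubernetes.io/hostname": "hunter-server"}
green_node = {"color": "green", "kubernetes.io/hostname": "hunter-server"}
```

This is why the log shows "Number of running nodes: 0" both initially (no node carries the label) and again after the relabel to green, until the DaemonSet spec is updated to match.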
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:31:09.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-np5kz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-np5kz to expose endpoints map[]
Dec 30 12:31:09.348: INFO: Get endpoints failed (13.147884ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 30 12:31:10.427: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-np5kz exposes endpoints map[] (1.092125582s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-np5kz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-np5kz to expose endpoints map[pod1:[80]]
Dec 30 12:31:14.863: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.414246876s elapsed, will retry)
Dec 30 12:31:20.778: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-np5kz exposes endpoints map[pod1:[80]] (10.329187372s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-np5kz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-np5kz to expose endpoints map[pod1:[80] pod2:[80]]
Dec 30 12:31:26.496: INFO: Unexpected endpoints: found map[4287c35e-2b00-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.682774261s elapsed, will retry)
Dec 30 12:31:29.675: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-np5kz exposes endpoints map[pod2:[80] pod1:[80]] (8.861617941s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-np5kz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-np5kz to expose endpoints map[pod2:[80]]
Dec 30 12:31:30.890: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-np5kz exposes endpoints map[pod2:[80]] (1.168609712s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-np5kz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-np5kz to expose endpoints map[]
Dec 30 12:31:32.032: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-np5kz exposes endpoints map[] (1.131331554s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:31:34.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-np5kz" for this suite.
Dec 30 12:31:58.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:31:58.589: INFO: namespace: e2e-tests-services-np5kz, resource: bindings, ignored listing per whitelist
Dec 30 12:31:58.629: INFO: namespace e2e-tests-services-np5kz deletion completed in 24.506154977s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.504 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
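The Services test above repeatedly compares the service's Endpoints object against an expected `map[podName:[ports]]`, e.g. `map[pod1:[80] pod2:[80]]`. A sketch of how Endpoints subsets flatten into that map (using plain dicts shaped like the v1 Endpoints API, as an illustration rather than client code):

```python
def endpoints_by_pod(subsets):
    """Flatten Endpoints subsets into {pod_name: sorted port list},
    the map[pod1:[80]] form the validation steps above compare against."""
    result = {}
    for subset in subsets:
        ports = sorted(p["port"] for p in subset.get("ports", []))
        for addr in subset.get("addresses", []):
            pod = addr.get("targetRef", {}).get("name")
            if pod:
                result[pod] = ports
    return result
```

Note the "Unexpected endpoints" retry at 12:31:26: endpoint propagation is eventually consistent, so the test polls until the flattened map matches.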
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:31:58.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:31:58.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:32:09.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7m662" for this suite.
Dec 30 12:32:55.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:32:55.491: INFO: namespace: e2e-tests-pods-7m662, resource: bindings, ignored listing per whitelist
Dec 30 12:32:55.553: INFO: namespace e2e-tests-pods-7m662 deletion completed in 46.181885843s

• [SLOW TEST:56.923 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
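The websocket exec test above relies on the `channel.k8s.io` subprotocols, in which every binary frame is prefixed with one byte naming the stream (0 stdin, 1 stdout, 2 stderr). A minimal sketch of demultiplexing such frames, assuming frames arrive as raw bytes:

```python
def demux_exec_frames(frames):
    """Split Kubernetes exec-over-websocket frames by channel byte.

    Each frame's first byte names the stream (0 stdin, 1 stdout,
    2 stderr); the rest is payload. Returns concatenated
    (stdout, stderr) payloads, ignoring other channels.
    """
    streams = {1: b"", 2: b""}
    for frame in frames:
        if not frame:
            continue
        channel, payload = frame[0], frame[1:]
        if channel in streams:
            streams[channel] += payload
    return streams[1], streams[2]
```

The conformance test execs a command in the pod over such a connection and asserts on the stdout channel's contents.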
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:32:55.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 30 12:32:56.054: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rspj6,SelfLink:/api/v1/namespaces/e2e-tests-watch-rspj6/configmaps/e2e-watch-test-resource-version,UID:81690b0d-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572410,Generation:0,CreationTimestamp:2019-12-30 12:32:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 12:32:56.054: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rspj6,SelfLink:/api/v1/namespaces/e2e-tests-watch-rspj6/configmaps/e2e-watch-test-resource-version,UID:81690b0d-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572411,Generation:0,CreationTimestamp:2019-12-30 12:32:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:32:56.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rspj6" for this suite.
Dec 30 12:33:02.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:33:02.186: INFO: namespace: e2e-tests-watch-rspj6, resource: bindings, ignored listing per whitelist
Dec 30 12:33:02.287: INFO: namespace e2e-tests-watch-rspj6 deletion completed in 6.228225548s

• [SLOW TEST:6.734 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
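The Watchers test above opens a watch at the resourceVersion returned by the first update, so only the second MODIFIED (RV 16572410) and the DELETED (RV 16572411) events are delivered. The delivery rule it verifies can be sketched as a simple replay filter (a model of the semantics, not of the apiserver's watch cache):

```python
def events_after(events, start_rv):
    """Replay only the watch events that occurred after `start_rv`,
    as a watch opened with resourceVersion=start_rv would deliver.

    events: (type, resource_version) tuples in occurrence order.
    resourceVersions are treated as opaque but ordered here for
    illustration; clients should not normally compare them numerically.
    """
    return [(t, rv) for t, rv in events if int(rv) > int(start_rv)]
```

Starting from the first update's version thus skips the creation and first modification, matching the two "Got : MODIFIED / DELETED" lines in the log.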
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:33:02.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:33:02.629: INFO: Creating deployment "nginx-deployment"
Dec 30 12:33:02.640: INFO: Waiting for observed generation 1
Dec 30 12:33:04.902: INFO: Waiting for all required pods to come up
Dec 30 12:33:04.941: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 30 12:33:41.680: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 30 12:33:41.732: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 30 12:33:41.753: INFO: Updating deployment nginx-deployment
Dec 30 12:33:41.753: INFO: Waiting for observed generation 2
Dec 30 12:33:44.430: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 30 12:33:44.453: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 30 12:33:45.968: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 30 12:33:46.751: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 30 12:33:46.751: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 30 12:33:46.773: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 30 12:33:47.468: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 30 12:33:47.468: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 30 12:33:47.505: INFO: Updating deployment nginx-deployment
Dec 30 12:33:47.505: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 30 12:33:49.606: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 30 12:33:51.627: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
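The scale-up just verified (old ReplicaSet 8 -> 20, new ReplicaSet 5 -> 13, deployment 10 -> 30 with maxSurge 3 allowing 33 total) illustrates proportional scaling: each ReplicaSet grows in proportion to its current size. A simplified model of that arithmetic (the real controller also consults the max-replicas annotations and surge limits):

```python
def proportional_scale(rs_sizes, allowed_total):
    """Split `allowed_total` replicas across ReplicaSets in proportion to
    their current sizes: floor each proportional share, then hand the
    leftover replicas out starting from the newest ReplicaSet.

    rs_sizes: current sizes, ordered oldest first.
    """
    current_total = sum(rs_sizes)
    shares = [size * allowed_total // current_total for size in rs_sizes]
    leftover = allowed_total - sum(shares)
    for i in reversed(range(len(shares))):  # newest first
        if leftover == 0:
            break
        shares[i] += 1
        leftover -= 1
    return shares
```

With sizes [8, 5] and an allowed total of 33, the floored shares are 20 and 12, and the single leftover replica goes to the newer ReplicaSet, reproducing the 20/13 split asserted above.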
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 30 12:33:52.329: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ftk9q/deployments/nginx-deployment,UID:8566704f-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572652,Generation:3,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-30 12:33:43 +0000 UTC 2019-12-30 12:33:02 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-30 12:33:50 +0000 UTC 2019-12-30 12:33:50 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 30 12:33:52.889: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ftk9q/replicasets/nginx-deployment-5c98f8fb5,UID:9cb76d50-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572647,Generation:3,CreationTimestamp:2019-12-30 12:33:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8566704f-2b00-11ea-a994-fa163e34d433 0xc0026b0ae7 0xc0026b0ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 12:33:52.889: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 30 12:33:52.890: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ftk9q/replicasets/nginx-deployment-85ddf47c5d,UID:8569ed60-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572642,Generation:3,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8566704f-2b00-11ea-a994-fa163e34d433 0xc0026b0e57 0xc0026b0e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 30 12:33:53.679: INFO: Pod "nginx-deployment-5c98f8fb5-2bfzc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2bfzc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-2bfzc,UID:9cd29a86-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572616,Generation:0,CreationTimestamp:2019-12-30 12:33:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc0022a8397 0xc0022a8398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a84a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a84c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-30 12:33:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.679: INFO: Pod "nginx-deployment-5c98f8fb5-2st28" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2st28,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-2st28,UID:9cc9f273-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572632,Generation:0,CreationTimestamp:2019-12-30 12:33:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc0022a8947 0xc0022a8948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a89c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a89e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-30 12:33:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.679: INFO: Pod "nginx-deployment-5c98f8fb5-46h7n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-46h7n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-46h7n,UID:9cd274cb-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572635,Generation:0,CreationTimestamp:2019-12-30 12:33:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc0022a9527 0xc0022a9528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a9590} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a95b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-30 12:33:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.680: INFO: Pod "nginx-deployment-5c98f8fb5-5dws5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5dws5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-5dws5,UID:9d512eee-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572643,Generation:0,CreationTimestamp:2019-12-30 12:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc0022a9677 0xc0022a9678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a99b0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a99d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-30 12:33:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.680: INFO: Pod "nginx-deployment-5c98f8fb5-84m4r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-84m4r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-84m4r,UID:a3058e12-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572667,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc0022a9f97 0xc0022a9f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002586000} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002586020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.680: INFO: Pod "nginx-deployment-5c98f8fb5-frmtg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-frmtg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-frmtg,UID:9d40add1-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572640,Generation:0,CreationTimestamp:2019-12-30 12:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc0025860b7 0xc0025860b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002586130} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002586150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-30 12:33:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.681: INFO: Pod "nginx-deployment-5c98f8fb5-hnnhw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hnnhw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-hnnhw,UID:a35c708c-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572669,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc002586227 0xc002586228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002586290} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025862c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.681: INFO: Pod "nginx-deployment-5c98f8fb5-kzmcn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kzmcn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-5c98f8fb5-kzmcn,UID:a35c54d8-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572668,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9cb76d50-2b00-11ea-a994-fa163e34d433 0xc002586350 0xc002586351}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025863c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0025863e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.682: INFO: Pod "nginx-deployment-85ddf47c5d-227dj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-227dj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-227dj,UID:85842d04-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572569,Generation:0,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586440 0xc002586441}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025864b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025864e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-30 12:33:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://129d628f4645949538893a9307037b2191ade970b04fbde8c1766f705e401bef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.682: INFO: Pod "nginx-deployment-85ddf47c5d-6w598" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6w598,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-6w598,UID:a35f2dfb-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572661,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc0025865a7 0xc0025865a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc002586610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.682: INFO: Pod "nginx-deployment-85ddf47c5d-bbvps" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bbvps,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-bbvps,UID:858448d3-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572573,Generation:0,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc0025866c0 0xc0025866c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-30 12:33:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://26f5bb9488077a61a4933a67a5d273d3934d4022458a6e3e03ea69594f2c095e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.682: INFO: Pod "nginx-deployment-85ddf47c5d-d9csb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d9csb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-d9csb,UID:a2b379a8-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572655,Generation:0,CreationTimestamp:2019-12-30 12:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586807 0xc002586808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586870} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.683: INFO: Pod "nginx-deployment-85ddf47c5d-fpqpw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fpqpw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-fpqpw,UID:857da5a4-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572565,Generation:0,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586907 0xc002586908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-30 12:33:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://672ee4dbd92408fb68e3e429ae32c9cbde4a0ba038d50fb39e92931469a486a4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.683: INFO: Pod "nginx-deployment-85ddf47c5d-glv2w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-glv2w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-glv2w,UID:8570f1d9-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572556,Generation:0,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586a57 0xc002586a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586ac0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-30 12:33:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://74985ebabf745f23fc865a5d111336aabce387f6504a29b03b73b6261bc84446}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.684: INFO: Pod "nginx-deployment-85ddf47c5d-k2z4n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k2z4n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-k2z4n,UID:857d9fc4-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572551,Generation:0,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586bb7 0xc002586bb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586c20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-30 12:33:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://444d4630ae702a945c1f7a2151ba17758496dba393790709669329be5aff2e12}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.684: INFO: Pod "nginx-deployment-85ddf47c5d-kcz4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kcz4v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-kcz4v,UID:a2ffbec1-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572663,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586d27 0xc002586d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.685: INFO: Pod "nginx-deployment-85ddf47c5d-lhzkd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lhzkd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-lhzkd,UID:85846c8e-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572540,Generation:0,CreationTimestamp:2019-12-30 12:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586e27 0xc002586e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002586eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002586ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-30 12:33:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://204822a144f9d39aa64eaab68bd0c3679e88deb869bba793a95fd4abb681a742}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.685: INFO: Pod "nginx-deployment-85ddf47c5d-m55hn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m55hn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-m55hn,UID:a35e630c-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572665,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002586fb7 0xc002586fb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc0025870b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025870d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.685: INFO: Pod "nginx-deployment-85ddf47c5d-mrzv9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mrzv9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-mrzv9,UID:85a3a5df-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572561,Generation:0,CreationTimestamp:2019-12-30 12:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002587130 0xc002587131}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025871a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025871c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-30 12:33:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://98e3d82007da79911999106c7b5f1cb3e9a36547252c5d694aee611b81c51112}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.686: INFO: Pod "nginx-deployment-85ddf47c5d-pq7qp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pq7qp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-pq7qp,UID:a35f3516-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572670,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc0025872b7 0xc0025872b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc002587320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002587340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.686: INFO: Pod "nginx-deployment-85ddf47c5d-sk7cm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sk7cm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-sk7cm,UID:85a404da-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572554,Generation:0,CreationTimestamp:2019-12-30 12:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc0025873a0 0xc0025873a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002587400} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025874e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-30 12:33:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 12:33:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9f3dab5f2d54bd50568120a63d975473583dd73617b8709ce44b154a69d31223}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.687: INFO: Pod "nginx-deployment-85ddf47c5d-t4cmf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t4cmf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-t4cmf,UID:a2ff70f5-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572662,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002587717 0xc002587718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002587780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025877a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 12:33:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 12:33:53.687: INFO: Pod "nginx-deployment-85ddf47c5d-z9rqg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z9rqg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ftk9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ftk9q/pods/nginx-deployment-85ddf47c5d-z9rqg,UID:a35e192f-2b00-11ea-a994-fa163e34d433,ResourceVersion:16572664,Generation:0,CreationTimestamp:2019-12-30 12:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8569ed60-2b00-11ea-a994-fa163e34d433 0xc002587817 0xc002587818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ck9j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ck9j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9ck9j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc002587880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025878a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:33:53.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ftk9q" for this suite.
Dec 30 12:34:46.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:34:46.520: INFO: namespace: e2e-tests-deployment-ftk9q, resource: bindings, ignored listing per whitelist
Dec 30 12:34:46.612: INFO: namespace e2e-tests-deployment-ftk9q deletion completed in 52.744348091s

• [SLOW TEST:104.324 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:34:46.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-c4a7eeb0-2b00-11ea-8970-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-c4a7ef5f-2b00-11ea-8970-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c4a7eeb0-2b00-11ea-8970-0242ac110005
STEP: Updating configmap cm-test-opt-upd-c4a7ef5f-2b00-11ea-8970-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-c4a7ef92-2b00-11ea-8970-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:35:21.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6xghd" for this suite.
Dec 30 12:35:45.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:35:45.712: INFO: namespace: e2e-tests-configmap-6xghd, resource: bindings, ignored listing per whitelist
Dec 30 12:35:45.783: INFO: namespace e2e-tests-configmap-6xghd deletion completed in 24.223306031s

• [SLOW TEST:59.171 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
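The ConfigMap test above mounts two ConfigMaps as volumes with `optional: true`, then deletes one and updates the other, waiting for the kubelet to reflect both changes in the volume. A minimal sketch of the kind of pod spec this exercises — names, image, and paths here are illustrative, not the test's actual generated ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical name; the test generates its own
spec:
  containers:
  - name: watcher
    image: busybox
    # Poll the mounted volume so changes propagated by the kubelet are visible.
    command: ["sh", "-c", "while true; do cat /etc/cm-volume/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-del      # may be deleted while the pod is running
      optional: true             # pod starts and keeps running even if absent
```

Because the volume source is marked optional, deleting the backing ConfigMap empties the volume rather than failing the pod, which is exactly what the "opt-del" step verifies.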
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:35:45.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:35:56.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-nfz44" for this suite.
Dec 30 12:36:44.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:36:44.640: INFO: namespace: e2e-tests-kubelet-test-nfz44, resource: bindings, ignored listing per whitelist
Dec 30 12:36:44.666: INFO: namespace e2e-tests-kubelet-test-nfz44 deletion completed in 48.232665188s

• [SLOW TEST:58.883 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
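The hostAliases test above verifies that entries from `pod.spec.hostAliases` are written by the kubelet into the container's `/etc/hosts`. A sketch of such a pod, with an illustrative name, IP, and hostnames:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases     # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:                   # extra lines the kubelet appends to /etc/hosts
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
```

The container's log should then include a line mapping `123.45.67.89` to `foo.local` and `bar.local`, alongside the kubelet-managed defaults.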
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:36:44.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 30 12:37:09.083: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:09.083: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:09.520: INFO: Exec stderr: ""
Dec 30 12:37:09.520: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:09.520: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:09.884: INFO: Exec stderr: ""
Dec 30 12:37:09.884: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:09.884: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:10.277: INFO: Exec stderr: ""
Dec 30 12:37:10.277: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:10.277: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:10.684: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 30 12:37:10.684: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:10.684: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:10.959: INFO: Exec stderr: ""
Dec 30 12:37:10.959: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:10.959: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:11.311: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 30 12:37:11.311: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:11.311: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:11.657: INFO: Exec stderr: ""
Dec 30 12:37:11.657: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:11.657: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:12.005: INFO: Exec stderr: ""
Dec 30 12:37:12.006: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:12.006: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:12.565: INFO: Exec stderr: ""
Dec 30 12:37:12.566: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8bm4q PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:37:12.566: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:37:13.138: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:37:13.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-8bm4q" for this suite.
Dec 30 12:38:03.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:38:03.515: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-8bm4q, resource: bindings, ignored listing per whitelist
Dec 30 12:38:03.599: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-8bm4q deletion completed in 50.449237661s

• [SLOW TEST:78.932 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:38:03.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:38:03.956: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:38:05.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-8gsr9" for this suite.
Dec 30 12:38:11.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:38:11.409: INFO: namespace: e2e-tests-custom-resource-definition-8gsr9, resource: bindings, ignored listing per whitelist
Dec 30 12:38:11.465: INFO: namespace e2e-tests-custom-resource-definition-8gsr9 deletion completed in 6.212074518s

• [SLOW TEST:7.865 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
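The CustomResourceDefinition test above simply creates and deletes a CRD object. On a v1.13 cluster like this one, CRDs use the `apiextensions.k8s.io/v1beta1` API; a minimal example manifest (group and kind are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches the v1.13 apiserver under test
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Creating this registers a new `crontabs.stable.example.com` resource with the apiserver; deleting it removes the resource type (and any custom objects of that type), which is the lifecycle the conformance test checks.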
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:38:11.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:38:23.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vgfq7" for this suite.
Dec 30 12:39:15.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:39:15.959: INFO: namespace: e2e-tests-kubelet-test-vgfq7, resource: bindings, ignored listing per whitelist
Dec 30 12:39:15.981: INFO: namespace e2e-tests-kubelet-test-vgfq7 deletion completed in 52.212822509s

• [SLOW TEST:64.516 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:39:15.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 30 12:39:16.164: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 30 12:39:21.181: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:39:23.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-m7dqq" for this suite.
Dec 30 12:39:34.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:39:34.389: INFO: namespace: e2e-tests-replication-controller-m7dqq, resource: bindings, ignored listing per whitelist
Dec 30 12:39:34.652: INFO: namespace e2e-tests-replication-controller-m7dqq deletion completed in 11.129401938s

• [SLOW TEST:18.672 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:39:34.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:39:35.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-v2kr8" for this suite.
Dec 30 12:39:42.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:39:42.175: INFO: namespace: e2e-tests-kubelet-test-v2kr8, resource: bindings, ignored listing per whitelist
Dec 30 12:39:42.254: INFO: namespace e2e-tests-kubelet-test-v2kr8 deletion completed in 7.037638109s

• [SLOW TEST:7.601 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:39:42.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 30 12:39:42.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573509,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 12:39:42.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573509,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 30 12:39:52.720: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573522,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 30 12:39:52.720: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573522,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 30 12:40:02.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573535,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 12:40:02.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573535,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 30 12:40:12.766: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573547,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 12:40:12.766: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-a,UID:73caed43-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573547,Generation:0,CreationTimestamp:2019-12-30 12:39:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 30 12:40:22.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-b,UID:8bbe50ac-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573560,Generation:0,CreationTimestamp:2019-12-30 12:40:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 12:40:22.802: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-b,UID:8bbe50ac-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573560,Generation:0,CreationTimestamp:2019-12-30 12:40:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 30 12:40:32.820: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-b,UID:8bbe50ac-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573573,Generation:0,CreationTimestamp:2019-12-30 12:40:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 12:40:32.820: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tfmd,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tfmd/configmaps/e2e-watch-test-configmap-b,UID:8bbe50ac-2b01-11ea-a994-fa163e34d433,ResourceVersion:16573573,Generation:0,CreationTimestamp:2019-12-30 12:40:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:40:42.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2tfmd" for this suite.
Dec 30 12:40:48.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:40:49.044: INFO: namespace: e2e-tests-watch-2tfmd, resource: bindings, ignored listing per whitelist
Dec 30 12:40:49.087: INFO: namespace e2e-tests-watch-2tfmd deletion completed in 6.247807145s

• [SLOW TEST:66.833 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:40:49.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 30 12:40:49.357: INFO: Waiting up to 5m0s for pod "pod-9b91f708-2b01-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-47d9t" to be "success or failure"
Dec 30 12:40:49.519: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 161.560114ms
Dec 30 12:40:51.540: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183166802s
Dec 30 12:40:53.559: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201576218s
Dec 30 12:40:55.760: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403386618s
Dec 30 12:40:57.780: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42343363s
Dec 30 12:40:59.793: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.436044527s
STEP: Saw pod success
Dec 30 12:40:59.793: INFO: Pod "pod-9b91f708-2b01-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:40:59.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9b91f708-2b01-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:41:00.673: INFO: Waiting for pod pod-9b91f708-2b01-11ea-8970-0242ac110005 to disappear
Dec 30 12:41:00.685: INFO: Pod pod-9b91f708-2b01-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:41:00.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-47d9t" for this suite.
Dec 30 12:41:06.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:41:06.872: INFO: namespace: e2e-tests-emptydir-47d9t, resource: bindings, ignored listing per whitelist
Dec 30 12:41:06.961: INFO: namespace e2e-tests-emptydir-47d9t deletion completed in 6.267273627s

• [SLOW TEST:17.874 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:41:06.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 30 12:41:17.782: INFO: Successfully updated pod "annotationupdatea62f599b-2b01-11ea-8970-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:41:19.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rbvbj" for this suite.
Dec 30 12:41:41.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:41:42.008: INFO: namespace: e2e-tests-projected-rbvbj, resource: bindings, ignored listing per whitelist
Dec 30 12:41:42.082: INFO: namespace e2e-tests-projected-rbvbj deletion completed in 22.191233308s

• [SLOW TEST:35.121 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:41:42.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4zmrl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 12:41:42.309: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 12:42:14.527: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-4zmrl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 12:42:14.527: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:42:15.060: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:42:15.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-4zmrl" for this suite.
Dec 30 12:42:39.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:42:39.216: INFO: namespace: e2e-tests-pod-network-test-4zmrl, resource: bindings, ignored listing per whitelist
Dec 30 12:42:39.320: INFO: namespace e2e-tests-pod-network-test-4zmrl deletion completed in 24.242822019s

• [SLOW TEST:57.237 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:42:39.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:43:11.792: INFO: Container started at 2019-12-30 12:42:47 +0000 UTC, pod became ready at 2019-12-30 12:43:09 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:43:11.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-p8xbh" for this suite.
Dec 30 12:43:35.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:43:36.044: INFO: namespace: e2e-tests-container-probe-p8xbh, resource: bindings, ignored listing per whitelist
Dec 30 12:43:36.106: INFO: namespace e2e-tests-container-probe-p8xbh deletion completed in 24.275115617s

• [SLOW TEST:56.784 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:43:36.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 30 12:43:36.289: INFO: Waiting up to 5m0s for pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005" in namespace "e2e-tests-containers-8qlvz" to be "success or failure"
Dec 30 12:43:36.301: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.731258ms
Dec 30 12:43:38.319: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029790611s
Dec 30 12:43:40.335: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045803902s
Dec 30 12:43:42.348: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059072447s
Dec 30 12:43:44.367: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077643051s
Dec 30 12:43:46.403: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114070359s
STEP: Saw pod success
Dec 30 12:43:46.403: INFO: Pod "client-containers-ff143cfe-2b01-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:43:46.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ff143cfe-2b01-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:43:47.033: INFO: Waiting for pod client-containers-ff143cfe-2b01-11ea-8970-0242ac110005 to disappear
Dec 30 12:43:47.090: INFO: Pod client-containers-ff143cfe-2b01-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:43:47.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8qlvz" for this suite.
Dec 30 12:43:53.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:43:53.391: INFO: namespace: e2e-tests-containers-8qlvz, resource: bindings, ignored listing per whitelist
Dec 30 12:43:53.417: INFO: namespace e2e-tests-containers-8qlvz deletion completed in 6.311015694s

• [SLOW TEST:17.311 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:43:53.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 12:43:53.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8jphd'
Dec 30 12:43:55.635: INFO: stderr: ""
Dec 30 12:43:55.635: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 30 12:43:55.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8jphd'
Dec 30 12:44:02.752: INFO: stderr: ""
Dec 30 12:44:02.753: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:44:02.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8jphd" for this suite.
Dec 30 12:44:08.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:44:08.916: INFO: namespace: e2e-tests-kubectl-8jphd, resource: bindings, ignored listing per whitelist
Dec 30 12:44:09.012: INFO: namespace e2e-tests-kubectl-8jphd deletion completed in 6.240723976s

• [SLOW TEST:15.596 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:44:09.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-12ab019e-2b02-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:44:09.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-cbcf2" to be "success or failure"
Dec 30 12:44:09.223: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.370879ms
Dec 30 12:44:11.235: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020601872s
Dec 30 12:44:13.252: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03774578s
Dec 30 12:44:15.601: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386982171s
Dec 30 12:44:17.870: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.65563606s
Dec 30 12:44:19.936: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.721814593s
STEP: Saw pod success
Dec 30 12:44:19.936: INFO: Pod "pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:44:19.960: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 30 12:44:20.120: INFO: Waiting for pod pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:44:20.867: INFO: Pod pod-configmaps-12ac390f-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:44:20.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cbcf2" for this suite.
Dec 30 12:44:27.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:44:27.385: INFO: namespace: e2e-tests-configmap-cbcf2, resource: bindings, ignored listing per whitelist
Dec 30 12:44:27.470: INFO: namespace e2e-tests-configmap-cbcf2 deletion completed in 6.553731484s

• [SLOW TEST:18.458 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:44:27.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:44:37.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-f25fd" for this suite.
Dec 30 12:45:25.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:45:26.008: INFO: namespace: e2e-tests-kubelet-test-f25fd, resource: bindings, ignored listing per whitelist
Dec 30 12:45:26.008: INFO: namespace e2e-tests-kubelet-test-f25fd deletion completed in 48.245386247s

• [SLOW TEST:58.538 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:45:26.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 12:45:26.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-9zch4'
Dec 30 12:45:26.388: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 12:45:26.388: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 30 12:45:30.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9zch4'
Dec 30 12:45:30.883: INFO: stderr: ""
Dec 30 12:45:30.883: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:45:30.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9zch4" for this suite.
Dec 30 12:45:37.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:45:37.147: INFO: namespace: e2e-tests-kubectl-9zch4, resource: bindings, ignored listing per whitelist
Dec 30 12:45:37.216: INFO: namespace e2e-tests-kubectl-9zch4 deletion completed in 6.308721597s

• [SLOW TEST:11.207 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:45:37.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 12:45:47.657: INFO: Waiting up to 5m0s for pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-pods-gfcgv" to be "success or failure"
Dec 30 12:45:47.683: INFO: Pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.481968ms
Dec 30 12:45:49.692: INFO: Pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034812207s
Dec 30 12:45:51.731: INFO: Pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074534814s
Dec 30 12:45:53.797: INFO: Pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139969864s
Dec 30 12:45:55.809: INFO: Pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.152639694s
STEP: Saw pod success
Dec 30 12:45:55.809: INFO: Pod "client-envvars-4d56e848-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:45:55.816: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-4d56e848-2b02-11ea-8970-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 30 12:45:56.140: INFO: Waiting for pod client-envvars-4d56e848-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:45:56.152: INFO: Pod client-envvars-4d56e848-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:45:56.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gfcgv" for this suite.
Dec 30 12:46:38.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:46:38.302: INFO: namespace: e2e-tests-pods-gfcgv, resource: bindings, ignored listing per whitelist
Dec 30 12:46:38.347: INFO: namespace e2e-tests-pods-gfcgv deletion completed in 42.188427801s

• [SLOW TEST:61.130 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:46:38.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-6baec6b4-2b02-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:46:38.590: INFO: Waiting up to 5m0s for pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-hjrp4" to be "success or failure"
Dec 30 12:46:38.618: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.263769ms
Dec 30 12:46:40.667: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076422877s
Dec 30 12:46:42.676: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085320187s
Dec 30 12:46:44.695: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104845507s
Dec 30 12:46:46.706: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115382965s
Dec 30 12:46:48.743: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152470096s
Dec 30 12:46:50.763: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.172559943s
STEP: Saw pod success
Dec 30 12:46:50.763: INFO: Pod "pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:46:50.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:46:50.843: INFO: Waiting for pod pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:46:50.852: INFO: Pod pod-secrets-6bb8085c-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:46:50.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hjrp4" for this suite.
Dec 30 12:46:57.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:46:57.170: INFO: namespace: e2e-tests-secrets-hjrp4, resource: bindings, ignored listing per whitelist
Dec 30 12:46:57.268: INFO: namespace e2e-tests-secrets-hjrp4 deletion completed in 6.252358676s

• [SLOW TEST:18.921 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:46:57.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:46:57.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-5qzh9" to be "success or failure"
Dec 30 12:46:57.625: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.211976ms
Dec 30 12:46:59.660: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082729998s
Dec 30 12:47:01.689: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111449203s
Dec 30 12:47:03.718: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141199223s
Dec 30 12:47:06.379: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.801964792s
Dec 30 12:47:08.400: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.823234134s
STEP: Saw pod success
Dec 30 12:47:08.400: INFO: Pod "downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
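The repeated status lines above follow the framework's fixed-interval polling pattern: fetch the pod's phase roughly every two seconds until it reaches "Succeeded" or "Failed", or the 5m0s budget expires. A minimal sketch of that loop in Python (the injectable `get_phase` and `sleep` callables are hypothetical stand-ins for the API lookup, so the sketch runs offline):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase.

    Returns the terminal phase ("Succeeded" or "Failed") or raises
    TimeoutError, mirroring the 5m0s "success or failure" wait above.
    """
    elapsed = 0.0
    while elapsed < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
        elapsed += interval
    raise TimeoutError("pod did not reach a terminal phase in %ss" % timeout)
```

In the transcript, the pod stays Pending for five polls before flipping straight to Succeeded; the loop above reproduces that observable sequence without touching a cluster.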
Dec 30 12:47:08.406: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:47:08.536: INFO: Waiting for pod downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:47:08.616: INFO: Pod downwardapi-volume-770f0882-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:47:08.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5qzh9" for this suite.
Dec 30 12:47:14.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:47:14.745: INFO: namespace: e2e-tests-projected-5qzh9, resource: bindings, ignored listing per whitelist
Dec 30 12:47:14.754: INFO: namespace e2e-tests-projected-5qzh9 deletion completed in 6.125669414s

• [SLOW TEST:17.486 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:47:14.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 30 12:47:15.031: INFO: PodSpec: initContainers in spec.initContainers
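The init-container test above exercises the ordering contract: every entry in `spec.initContainers` must run to completion, in order, before the app containers of the RestartAlways pod start. A toy model of that sequencing (the `run` callable and container names are illustrative, not taken from the log):

```python
def run_pod(init_containers, app_containers, run):
    """Run init containers sequentially; any non-zero exit blocks the
    app containers, which only start once every init container has
    exited 0 -- the contract this conformance test verifies."""
    for name in init_containers:
        if run(name) != 0:
            return ("init-blocked", name)
    for name in app_containers:
        run(name)
    return ("running", None)
```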
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:47:35.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-tmnpm" for this suite.
Dec 30 12:48:00.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:48:00.384: INFO: namespace: e2e-tests-init-container-tmnpm, resource: bindings, ignored listing per whitelist
Dec 30 12:48:00.462: INFO: namespace e2e-tests-init-container-tmnpm deletion completed in 24.48531179s

• [SLOW TEST:45.708 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:48:00.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mpgmj
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-mpgmj
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-mpgmj
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-mpgmj
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-mpgmj
Dec 30 12:48:13.175: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-mpgmj, name: ss-0, uid: a415895c-2b02-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 30 12:48:13.291: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-mpgmj, name: ss-0, uid: a415895c-2b02-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 30 12:48:13.323: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-mpgmj, name: ss-0, uid: a415895c-2b02-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 30 12:48:13.335: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-mpgmj
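The eviction scenario hinges on a hostPort conflict: while the standalone test-pod holds the port, each ss-0 the StatefulSet controller creates fails, and the test watches phase events for that pod UID until the controller's delete event arrives (the three "Observed stateful pod … Waiting for statefulset controller to delete" lines above). The observation loop reduces to roughly this sketch (the event-tuple shape is an assumption for illustration):

```python
def observe_until_deleted(events):
    """Consume (phase, deleted) events for one stateful pod and return
    the phases seen before the controller's delete event."""
    seen = []
    for phase, deleted in events:
        if deleted:
            return seen
        seen.append(phase)
    raise RuntimeError("no delete event observed for stateful pod")
```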
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-mpgmj
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-mpgmj and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 30 12:48:27.081: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mpgmj
Dec 30 12:48:27.092: INFO: Scaling statefulset ss to 0
Dec 30 12:48:37.169: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 12:48:37.175: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:48:37.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mpgmj" for this suite.
Dec 30 12:48:43.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:48:43.473: INFO: namespace: e2e-tests-statefulset-mpgmj, resource: bindings, ignored listing per whitelist
Dec 30 12:48:43.593: INFO: namespace e2e-tests-statefulset-mpgmj deletion completed in 6.323963658s

• [SLOW TEST:43.130 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:48:43.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:48:43.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-kf5f7" to be "success or failure"
Dec 30 12:48:44.011: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.266207ms
Dec 30 12:48:46.171: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236773173s
Dec 30 12:48:48.183: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248681974s
Dec 30 12:48:50.190: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255745818s
Dec 30 12:48:52.252: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.317614187s
Dec 30 12:48:54.263: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.328948993s
STEP: Saw pod success
Dec 30 12:48:54.263: INFO: Pod "downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:48:54.267: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:48:55.639: INFO: Waiting for pod downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:48:55.648: INFO: Pod downwardapi-volume-b6720e9d-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:48:55.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kf5f7" for this suite.
Dec 30 12:49:01.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:49:02.113: INFO: namespace: e2e-tests-downward-api-kf5f7, resource: bindings, ignored listing per whitelist
Dec 30 12:49:02.176: INFO: namespace e2e-tests-downward-api-kf5f7 deletion completed in 6.320435456s

• [SLOW TEST:18.583 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:49:02.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-g2k9x/configmap-test-c175bd2f-2b02-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:49:02.509: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-g2k9x" to be "success or failure"
Dec 30 12:49:02.527: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.008003ms
Dec 30 12:49:04.575: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065968523s
Dec 30 12:49:06.596: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086424968s
Dec 30 12:49:08.758: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248444696s
Dec 30 12:49:10.888: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378229472s
Dec 30 12:49:12.950: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.440680736s
STEP: Saw pod success
Dec 30 12:49:12.950: INFO: Pod "pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
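The ConfigMap test above creates `configmap-test-…` and injects its keys into the `env-test` container's environment via `configMapKeyRef`. The pod manifest it generates can be approximated as follows (pod, image, and key names are illustrative, not read from the log):

```python
def configmap_env_pod(pod_name, image, configmap_name, keys):
    """Build a minimal pod manifest whose container imports the given
    ConfigMap keys as environment variables via configMapKeyRef."""
    env = [
        {"name": key.upper().replace("-", "_"),
         "valueFrom": {"configMapKeyRef": {"name": configmap_name,
                                           "key": key}}}
        for key in keys
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{"name": "env-test", "image": image, "env": env}],
        },
    }
```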
Dec 30 12:49:12.958: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005 container env-test: 
STEP: delete the pod
Dec 30 12:49:13.120: INFO: Waiting for pod pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:49:13.133: INFO: Pod pod-configmaps-c1783002-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:49:13.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g2k9x" for this suite.
Dec 30 12:49:19.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:49:19.336: INFO: namespace: e2e-tests-configmap-g2k9x, resource: bindings, ignored listing per whitelist
Dec 30 12:49:19.363: INFO: namespace e2e-tests-configmap-g2k9x deletion completed in 6.220759622s

• [SLOW TEST:17.186 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:49:19.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 30 12:49:27.836: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:49:52.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6vfjb" for this suite.
Dec 30 12:49:58.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:49:58.312: INFO: namespace: e2e-tests-namespaces-6vfjb, resource: bindings, ignored listing per whitelist
Dec 30 12:49:58.583: INFO: namespace e2e-tests-namespaces-6vfjb deletion completed in 6.367974123s
STEP: Destroying namespace "e2e-tests-nsdeletetest-24gtd" for this suite.
Dec 30 12:49:58.593: INFO: Namespace e2e-tests-nsdeletetest-24gtd was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-xq4gd" for this suite.
Dec 30 12:50:04.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:50:04.728: INFO: namespace: e2e-tests-nsdeletetest-xq4gd, resource: bindings, ignored listing per whitelist
Dec 30 12:50:04.802: INFO: namespace e2e-tests-nsdeletetest-xq4gd deletion completed in 6.208419481s
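The namespace test above verifies cascade semantics: deleting a namespace removes every pod scoped to it, and a recreated namespace of the same name starts empty. A toy in-memory model of that invariant (purely illustrative; no API machinery involved):

```python
class Cluster:
    """Tiny model of namespace-scoped pods for the deletion invariant."""

    def __init__(self):
        self.pods = {}  # (namespace, pod_name) -> phase

    def create_pod(self, namespace, name, phase="Running"):
        self.pods[(namespace, name)] = phase

    def delete_namespace(self, namespace):
        # Namespace deletion cascades: drop every pod in that namespace.
        self.pods = {k: v for k, v in self.pods.items() if k[0] != namespace}

    def pods_in(self, namespace):
        return [name for (ns, name) in self.pods if ns == namespace]
```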

• [SLOW TEST:45.439 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:50:04.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e6c5c8c2-2b02-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 12:50:05.002: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-pkjqh" to be "success or failure"
Dec 30 12:50:05.027: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.525537ms
Dec 30 12:50:07.341: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338885663s
Dec 30 12:50:09.368: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365805681s
Dec 30 12:50:11.782: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780577268s
Dec 30 12:50:13.799: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796786681s
Dec 30 12:50:16.423: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.420814325s
STEP: Saw pod success
Dec 30 12:50:16.423: INFO: Pod "pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:50:16.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 12:50:16.998: INFO: Waiting for pod pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005 to disappear
Dec 30 12:50:17.052: INFO: Pod pod-projected-configmaps-e6c6c7ce-2b02-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:50:17.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pkjqh" for this suite.
Dec 30 12:50:25.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:50:25.283: INFO: namespace: e2e-tests-projected-pkjqh, resource: bindings, ignored listing per whitelist
Dec 30 12:50:25.370: INFO: namespace e2e-tests-projected-pkjqh deletion completed in 8.247451904s

• [SLOW TEST:20.569 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:50:25.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:50:25.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-s2mrk" for this suite.
Dec 30 12:50:31.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:50:31.749: INFO: namespace: e2e-tests-services-s2mrk, resource: bindings, ignored listing per whitelist
Dec 30 12:50:31.766: INFO: namespace e2e-tests-services-s2mrk deletion completed in 6.198821556s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.396 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:50:31.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 30 12:50:31.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:32.230: INFO: stderr: ""
Dec 30 12:50:32.230: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 12:50:32.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:32.438: INFO: stderr: ""
Dec 30 12:50:32.438: INFO: stdout: "update-demo-nautilus-jhtbr update-demo-nautilus-xtj7g "
Dec 30 12:50:32.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhtbr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:32.577: INFO: stderr: ""
Dec 30 12:50:32.577: INFO: stdout: ""
Dec 30 12:50:32.577: INFO: update-demo-nautilus-jhtbr is created but not running
Dec 30 12:50:37.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:37.714: INFO: stderr: ""
Dec 30 12:50:37.714: INFO: stdout: "update-demo-nautilus-jhtbr update-demo-nautilus-xtj7g "
Dec 30 12:50:37.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhtbr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:37.809: INFO: stderr: ""
Dec 30 12:50:37.809: INFO: stdout: ""
Dec 30 12:50:37.809: INFO: update-demo-nautilus-jhtbr is created but not running
Dec 30 12:50:42.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:42.928: INFO: stderr: ""
Dec 30 12:50:42.928: INFO: stdout: "update-demo-nautilus-jhtbr update-demo-nautilus-xtj7g "
Dec 30 12:50:42.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhtbr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:43.032: INFO: stderr: ""
Dec 30 12:50:43.032: INFO: stdout: ""
Dec 30 12:50:43.032: INFO: update-demo-nautilus-jhtbr is created but not running
Dec 30 12:50:48.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:48.185: INFO: stderr: ""
Dec 30 12:50:48.185: INFO: stdout: "update-demo-nautilus-jhtbr update-demo-nautilus-xtj7g "
Dec 30 12:50:48.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhtbr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:48.302: INFO: stderr: ""
Dec 30 12:50:48.302: INFO: stdout: "true"
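The Go template in the kubectl invocations above emits "true" only when the `update-demo` container reports a `running` state, and an empty string otherwise — which is why the earlier polls printed empty stdout and this one prints "true". A Python rendering of the same check (pod dict shape follows the pod's `status.containerStatuses`):

```python
def container_running(pod, name="update-demo"):
    """Return "true" iff the named container reports a running state,
    matching the Go template's output in the log above."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return "true"
    return ""
```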
Dec 30 12:50:48.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhtbr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:48.414: INFO: stderr: ""
Dec 30 12:50:48.415: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 12:50:48.415: INFO: validating pod update-demo-nautilus-jhtbr
Dec 30 12:50:48.473: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 12:50:48.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 12:50:48.473: INFO: update-demo-nautilus-jhtbr is verified up and running
Dec 30 12:50:48.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtj7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:48.627: INFO: stderr: ""
Dec 30 12:50:48.627: INFO: stdout: "true"
Dec 30 12:50:48.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtj7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:50:48.748: INFO: stderr: ""
Dec 30 12:50:48.749: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 12:50:48.749: INFO: validating pod update-demo-nautilus-xtj7g
Dec 30 12:50:48.762: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 12:50:48.763: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 12:50:48.763: INFO: update-demo-nautilus-xtj7g is verified up and running
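Each "validating pod …" step fetches a small JSON payload from the pod and compares its image field against the expected value, producing the `got data: {"image": "nautilus.jpg"}` lines above. The check itself reduces to:

```python
import json

def validate_pod_data(raw, expected_image):
    """Unmarshal the pod's JSON response and confirm it reports the
    expected image, as in the 'verified up and running' log lines."""
    data = json.loads(raw)
    got = data.get("image")
    if got != expected_image:
        raise AssertionError("got %r, expecting %r" % (got, expected_image))
    return got
```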
STEP: rolling-update to new replication controller
Dec 30 12:50:48.765: INFO: scanned /root for discovery docs: 
Dec 30 12:50:48.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:51:24.142: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 30 12:51:24.142: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
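The rolling-update stdout above alternates scale-up of the new controller with scale-down of the old one under the two constraints it states: keep at least 2 pods available and never exceed 3 pods total. Starting from 2 old / 0 new, that yields exactly the printed sequence (kitten to 1, nautilus to 1, kitten to 2, nautilus to 0). A sketch of that scaling loop (a simplified model of the deprecated `rolling-update` behavior, not the real implementation):

```python
def rolling_update_steps(old, desired, max_total):
    """Yield (controller, replica_count) scaling steps: surge the new
    controller first, never exceeding max_total pods, then drain the
    old one, alternating until old == 0 and new == desired."""
    new = 0
    steps = []
    while old > 0 or new < desired:
        if new < desired and old + new < max_total:
            new += 1
            steps.append(("new", new))
        elif old > 0:
            old -= 1
            steps.append(("old", old))
    return steps
```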
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 12:51:24.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:51:24.270: INFO: stderr: ""
Dec 30 12:51:24.270: INFO: stdout: "update-demo-kitten-5bjz8 update-demo-kitten-n9r6l "
Dec 30 12:51:24.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5bjz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:51:24.382: INFO: stderr: ""
Dec 30 12:51:24.382: INFO: stdout: "true"
Dec 30 12:51:24.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5bjz8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:51:24.481: INFO: stderr: ""
Dec 30 12:51:24.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 30 12:51:24.481: INFO: validating pod update-demo-kitten-5bjz8
Dec 30 12:51:24.531: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 30 12:51:24.531: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 30 12:51:24.531: INFO: update-demo-kitten-5bjz8 is verified up and running
Dec 30 12:51:24.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n9r6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:51:24.654: INFO: stderr: ""
Dec 30 12:51:24.654: INFO: stdout: "true"
Dec 30 12:51:24.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n9r6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v5qmv'
Dec 30 12:51:24.750: INFO: stderr: ""
Dec 30 12:51:24.750: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 30 12:51:24.750: INFO: validating pod update-demo-kitten-n9r6l
Dec 30 12:51:24.762: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 30 12:51:24.762: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 30 12:51:24.762: INFO: update-demo-kitten-n9r6l is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:51:24.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v5qmv" for this suite.
Dec 30 12:51:51.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:51:51.364: INFO: namespace: e2e-tests-kubectl-v5qmv, resource: bindings, ignored listing per whitelist
Dec 30 12:51:51.382: INFO: namespace e2e-tests-kubectl-v5qmv deletion completed in 26.611641209s

• [SLOW TEST:79.616 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:51:51.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 30 12:51:51.678: INFO: Waiting up to 5m0s for pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-var-expansion-9szx5" to be "success or failure"
Dec 30 12:51:51.698: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.653163ms
Dec 30 12:51:53.716: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038117181s
Dec 30 12:51:55.743: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064484502s
Dec 30 12:51:57.763: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084942403s
Dec 30 12:51:59.781: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102664957s
Dec 30 12:52:02.274: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.595710903s
STEP: Saw pod success
Dec 30 12:52:02.274: INFO: Pod "var-expansion-2653b324-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:52:02.281: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2653b324-2b03-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 12:52:02.524: INFO: Waiting for pod var-expansion-2653b324-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:52:02.558: INFO: Pod var-expansion-2653b324-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:52:02.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-9szx5" for this suite.
Dec 30 12:52:08.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:52:08.718: INFO: namespace: e2e-tests-var-expansion-9szx5, resource: bindings, ignored listing per whitelist
Dec 30 12:52:08.881: INFO: namespace e2e-tests-var-expansion-9szx5 deletion completed in 6.305213705s

• [SLOW TEST:17.499 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
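For readers following the log: the Variable Expansion test above creates a short-lived pod whose container `command` references an environment variable via Kubernetes `$(VAR)` substitution, then checks the container output. A minimal sketch of that kind of manifest follows — the container name `dapi-container` comes from the log; the pod name, image, and message value are illustrative assumptions, not the exact e2e fixture:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container     # container name as seen in the log
    image: busybox
    # $(MESSAGE) is expanded by the kubelet before the command runs.
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from substitution"
```

The pod runs to completion, which is why the log polls it from `Pending` to `Succeeded` rather than `Running`.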
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:52:08.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-qjh28
Dec 30 12:52:19.124: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-qjh28
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 12:52:19.133: INFO: Initial restart count of pod liveness-exec is 0
Dec 30 12:53:09.954: INFO: Restart count of pod e2e-tests-container-probe-qjh28/liveness-exec is now 1 (50.820433081s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:53:09.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qjh28" for this suite.
Dec 30 12:53:18.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:53:18.253: INFO: namespace: e2e-tests-container-probe-qjh28, resource: bindings, ignored listing per whitelist
Dec 30 12:53:18.324: INFO: namespace e2e-tests-container-probe-qjh28 deletion completed in 8.311705048s

• [SLOW TEST:69.443 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
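The Probing container test above waits for an exec liveness probe (`cat /tmp/health`) to start failing and for the kubelet to restart the container, which is the restart-count bump the log records after ~50s. A sketch of the canonical shape of such a pod — the pod name `liveness-exec` matches the log; the image, timings, and the touch/remove trick are illustrative:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the health file, then delete it so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
```

Once `/tmp/health` is gone, consecutive probe failures exceed the failure threshold and the kubelet kills and restarts the container, incrementing `restartCount`.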
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:53:18.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 12:53:18.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-bgmdl" to be "success or failure"
Dec 30 12:53:18.604: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649284ms
Dec 30 12:53:20.644: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046365943s
Dec 30 12:53:22.667: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069069163s
Dec 30 12:53:25.209: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611757204s
Dec 30 12:53:27.254: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656178428s
Dec 30 12:53:29.275: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.676943578s
Dec 30 12:53:31.294: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.6959482s
STEP: Saw pod success
Dec 30 12:53:31.294: INFO: Pod "downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:53:31.300: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 12:53:31.506: INFO: Waiting for pod downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:53:31.511: INFO: Pod downwardapi-volume-5a2a219a-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:53:31.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bgmdl" for this suite.
Dec 30 12:53:37.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:53:37.677: INFO: namespace: e2e-tests-projected-bgmdl, resource: bindings, ignored listing per whitelist
Dec 30 12:53:37.783: INFO: namespace e2e-tests-projected-bgmdl deletion completed in 6.26542676s

• [SLOW TEST:19.458 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
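The Projected downwardAPI test above mounts the container's CPU request into a file via a projected volume and reads it back. A sketch of that pattern — the container name `client-container` is from the log; the mount path, file name, and request value are assumptions:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name as seen in the log
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # expose the request in millicores
```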
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:53:37.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-65c4c704-2b03-11ea-8970-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-65c4c6d5-2b03-11ea-8970-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 30 12:53:38.162: INFO: Waiting up to 5m0s for pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-l22bq" to be "success or failure"
Dec 30 12:53:38.181: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.568137ms
Dec 30 12:53:40.498: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336249078s
Dec 30 12:53:42.528: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366311325s
Dec 30 12:53:44.569: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40717767s
Dec 30 12:53:46.667: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505410245s
Dec 30 12:53:48.912: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.750030258s
STEP: Saw pod success
Dec 30 12:53:48.912: INFO: Pod "projected-volume-65c4c645-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:53:48.921: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-65c4c645-2b03-11ea-8970-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Dec 30 12:53:49.007: INFO: Waiting for pod projected-volume-65c4c645-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:53:49.063: INFO: Pod projected-volume-65c4c645-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:53:49.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l22bq" for this suite.
Dec 30 12:53:55.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:53:55.217: INFO: namespace: e2e-tests-projected-l22bq, resource: bindings, ignored listing per whitelist
Dec 30 12:53:55.340: INFO: namespace e2e-tests-projected-l22bq deletion completed in 6.267543083s

• [SLOW TEST:17.557 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
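The Projected combined test above creates a ConfigMap and a Secret (their generated names appear in the STEP lines) and projects both, plus downward API data, into a single volume. A sketch of that all-in-one projection — the container name `projected-all-volume-test` is from the log; the resource names here drop the generated suffixes, and the key/path names are assumptions:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test  # container name as seen in the log
    image: busybox
    command: ["/bin/sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                       # all three source types in one volume
      - configMap:
          name: configmap-projected-all-test-volume
          items:
          - key: configmap-data      # hypothetical key
            path: cm-data
      - secret:
          name: secret-projected-all-test-volume
          items:
          - key: secret-data         # hypothetical key
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```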
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:53:55.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 30 12:53:55.723: INFO: Waiting up to 5m0s for pod "pod-704b1222-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-wnxjf" to be "success or failure"
Dec 30 12:53:55.869: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 145.782225ms
Dec 30 12:53:57.884: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1609343s
Dec 30 12:53:59.903: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179928963s
Dec 30 12:54:02.075: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351546801s
Dec 30 12:54:04.114: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39047475s
Dec 30 12:54:06.129: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.405533364s
STEP: Saw pod success
Dec 30 12:54:06.129: INFO: Pod "pod-704b1222-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:54:06.137: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-704b1222-2b03-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:54:06.328: INFO: Waiting for pod pod-704b1222-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:54:06.340: INFO: Pod pod-704b1222-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:54:06.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wnxjf" for this suite.
Dec 30 12:54:14.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:54:14.685: INFO: namespace: e2e-tests-emptydir-wnxjf, resource: bindings, ignored listing per whitelist
Dec 30 12:54:14.701: INFO: namespace e2e-tests-emptydir-wnxjf deletion completed in 8.354050571s

• [SLOW TEST:19.360 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
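The EmptyDir test above ("root,0777,tmpfs") writes a file as root with mode 0777 into a memory-backed emptyDir and verifies the permissions. A sketch of that setup — the container name `test-container` is from the log; the pod name, image, and the write/verify command are assumptions:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container      # container name as seen in the log
    image: busybox
    # Write a file as root, set mode 0777, and show the result.
    command: ["/bin/sh", "-c", "echo ok > /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # tmpfs-backed emptyDir
```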
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:54:14.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 30 12:54:15.033: INFO: Waiting up to 5m0s for pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-var-expansion-bfd64" to be "success or failure"
Dec 30 12:54:15.041: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153626ms
Dec 30 12:54:17.599: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566193545s
Dec 30 12:54:19.620: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586366976s
Dec 30 12:54:22.154: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120531554s
Dec 30 12:54:24.163: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.12930081s
Dec 30 12:54:26.172: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.139012391s
STEP: Saw pod success
Dec 30 12:54:26.172: INFO: Pod "var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:54:26.176: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 30 12:54:26.934: INFO: Waiting for pod var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:54:27.128: INFO: Pod var-expansion-7bb7d626-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:54:27.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bfd64" for this suite.
Dec 30 12:54:35.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:54:35.585: INFO: namespace: e2e-tests-var-expansion-bfd64, resource: bindings, ignored listing per whitelist
Dec 30 12:54:35.615: INFO: namespace e2e-tests-var-expansion-bfd64 deletion completed in 8.462014369s

• [SLOW TEST:20.913 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
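The env-composition test above defines one environment variable in terms of another using `$(VAR)` references inside `env` values. A sketch of that pattern — the container name `dapi-container` is from the log; the variable names and values are illustrative:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-compose-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name as seen in the log
    image: busybox
    command: ["/bin/sh", "-c", "echo $(COMPOSED)"]
    env:
    - name: FIRST
      value: "foo"
    - name: COMPOSED
      value: "$(FIRST)-bar"          # composed from the earlier variable
```

Note that ordering matters: a `$(VAR)` reference only expands if the referenced variable is defined earlier in the `env` list; otherwise the literal string is passed through.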
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:54:35.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-8876f226-2b03-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:54:36.418: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-6r942" to be "success or failure"
Dec 30 12:54:36.433: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.383525ms
Dec 30 12:54:38.972: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.553597994s
Dec 30 12:54:40.997: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578337647s
Dec 30 12:54:43.017: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599142123s
Dec 30 12:54:48.168: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.749492254s
Dec 30 12:54:50.179: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.761237754s
Dec 30 12:54:53.912: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.493540116s
Dec 30 12:54:55.928: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.509705899s
Dec 30 12:54:58.024: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.605621345s
Dec 30 12:55:00.108: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.690162645s
STEP: Saw pod success
Dec 30 12:55:00.108: INFO: Pod "pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:55:00.118: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:55:00.487: INFO: Waiting for pod pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:55:00.536: INFO: Pod pod-projected-secrets-888b534d-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:55:00.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6r942" for this suite.
Dec 30 12:55:08.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:55:08.808: INFO: namespace: e2e-tests-projected-6r942, resource: bindings, ignored listing per whitelist
Dec 30 12:55:08.865: INFO: namespace e2e-tests-projected-6r942 deletion completed in 8.304320155s

• [SLOW TEST:33.250 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
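The Projected secret test above mounts the same secret into two separate volumes of one pod and reads it from both mounts. A sketch of that shape — the container name `secret-volume-test` is from the log; the secret name drops the generated suffix, and the mount paths and key are assumptions:

```yaml
# Illustrative sketch, not the actual e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test         # container name as seen in the log
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume-1/data /etc/secret-volume-2/data"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```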
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:55:08.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 30 12:55:09.150: INFO: namespace e2e-tests-kubectl-fszv5
Dec 30 12:55:09.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fszv5'
Dec 30 12:55:11.500: INFO: stderr: ""
Dec 30 12:55:11.500: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 30 12:55:12.562: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:12.562: INFO: Found 0 / 1
Dec 30 12:55:13.574: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:13.574: INFO: Found 0 / 1
Dec 30 12:55:15.024: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:15.025: INFO: Found 0 / 1
Dec 30 12:55:15.555: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:15.555: INFO: Found 0 / 1
Dec 30 12:55:16.559: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:16.559: INFO: Found 0 / 1
Dec 30 12:55:17.555: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:17.555: INFO: Found 0 / 1
Dec 30 12:55:18.539: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:18.539: INFO: Found 0 / 1
Dec 30 12:55:19.994: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:19.994: INFO: Found 0 / 1
Dec 30 12:55:21.221: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:21.221: INFO: Found 0 / 1
Dec 30 12:55:21.509: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:21.509: INFO: Found 0 / 1
Dec 30 12:55:22.870: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:22.870: INFO: Found 0 / 1
Dec 30 12:55:23.513: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:23.513: INFO: Found 0 / 1
Dec 30 12:55:24.522: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:24.522: INFO: Found 0 / 1
Dec 30 12:55:25.522: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:25.522: INFO: Found 1 / 1
Dec 30 12:55:25.522: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 30 12:55:25.536: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 12:55:25.536: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 30 12:55:25.536: INFO: wait on redis-master startup in e2e-tests-kubectl-fszv5 
Dec 30 12:55:25.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p47gj redis-master --namespace=e2e-tests-kubectl-fszv5'
Dec 30 12:55:25.697: INFO: stderr: ""
Dec 30 12:55:25.697: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Dec 12:55:23.832 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Dec 12:55:23.833 # Server started, Redis version 3.2.12\n1:M 30 Dec 12:55:23.833 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Dec 12:55:23.833 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 30 12:55:25.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-fszv5'
Dec 30 12:55:25.922: INFO: stderr: ""
Dec 30 12:55:25.922: INFO: stdout: "service/rm2 exposed\n"
Dec 30 12:55:26.004: INFO: Service rm2 in namespace e2e-tests-kubectl-fszv5 found.
STEP: exposing service
Dec 30 12:55:28.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-fszv5'
Dec 30 12:55:28.263: INFO: stderr: ""
Dec 30 12:55:28.263: INFO: stdout: "service/rm3 exposed\n"
Dec 30 12:55:28.287: INFO: Service rm3 in namespace e2e-tests-kubectl-fszv5 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:55:30.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fszv5" for this suite.
Dec 30 12:55:56.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:55:56.659: INFO: namespace: e2e-tests-kubectl-fszv5, resource: bindings, ignored listing per whitelist
Dec 30 12:55:56.675: INFO: namespace e2e-tests-kubectl-fszv5 deletion completed in 26.343711138s

• [SLOW TEST:47.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:55:56.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b886f0d4-2b03-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:55:56.922: INFO: Waiting up to 5m0s for pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-vlmns" to be "success or failure"
Dec 30 12:55:56.940: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.945707ms
Dec 30 12:55:58.955: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032521428s
Dec 30 12:56:00.983: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060999773s
Dec 30 12:56:03.336: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414225789s
Dec 30 12:56:05.488: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566123094s
Dec 30 12:56:07.515: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.592558095s
STEP: Saw pod success
Dec 30 12:56:07.515: INFO: Pod "pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:56:07.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:56:07.776: INFO: Waiting for pod pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:56:07.954: INFO: Pod pod-secrets-b888bc45-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:56:07.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vlmns" for this suite.
Dec 30 12:56:14.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:56:14.092: INFO: namespace: e2e-tests-secrets-vlmns, resource: bindings, ignored listing per whitelist
Dec 30 12:56:14.180: INFO: namespace e2e-tests-secrets-vlmns deletion completed in 6.215332651s

• [SLOW TEST:17.504 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:56:14.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 30 12:56:28.011: INFO: Successfully updated pod "annotationupdatec2f2b74d-2b03-11ea-8970-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:56:30.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6rtsz" for this suite.
Dec 30 12:56:54.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:56:54.528: INFO: namespace: e2e-tests-downward-api-6rtsz, resource: bindings, ignored listing per whitelist
Dec 30 12:56:54.765: INFO: namespace e2e-tests-downward-api-6rtsz deletion completed in 24.663265488s

• [SLOW TEST:40.585 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:56:54.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-st77
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 12:56:56.000: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-st77" in namespace "e2e-tests-subpath-fbt84" to be "success or failure"
Dec 30 12:56:56.051: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 50.883038ms
Dec 30 12:56:58.405: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404434352s
Dec 30 12:57:00.424: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423836574s
Dec 30 12:57:02.457: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456956142s
Dec 30 12:57:04.792: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.791521642s
Dec 30 12:57:06.848: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 10.84787821s
Dec 30 12:57:09.060: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 13.059288558s
Dec 30 12:57:11.080: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 15.079597093s
Dec 30 12:57:13.103: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 17.102830787s
Dec 30 12:57:15.131: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 19.13103365s
Dec 30 12:57:17.151: INFO: Pod "pod-subpath-test-secret-st77": Phase="Pending", Reason="", readiness=false. Elapsed: 21.150398906s
Dec 30 12:57:19.167: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 23.166589181s
Dec 30 12:57:21.182: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 25.181132228s
Dec 30 12:57:23.200: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 27.199509798s
Dec 30 12:57:25.215: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 29.214632921s
Dec 30 12:57:27.235: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 31.234853583s
Dec 30 12:57:29.253: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 33.252698592s
Dec 30 12:57:31.275: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 35.274114547s
Dec 30 12:57:33.288: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 37.287991124s
Dec 30 12:57:35.303: INFO: Pod "pod-subpath-test-secret-st77": Phase="Running", Reason="", readiness=false. Elapsed: 39.302212685s
Dec 30 12:57:37.672: INFO: Pod "pod-subpath-test-secret-st77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.671249096s
STEP: Saw pod success
Dec 30 12:57:37.672: INFO: Pod "pod-subpath-test-secret-st77" satisfied condition "success or failure"
Dec 30 12:57:37.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-st77 container test-container-subpath-secret-st77: 
STEP: delete the pod
Dec 30 12:57:38.497: INFO: Waiting for pod pod-subpath-test-secret-st77 to disappear
Dec 30 12:57:38.530: INFO: Pod pod-subpath-test-secret-st77 no longer exists
STEP: Deleting pod pod-subpath-test-secret-st77
Dec 30 12:57:38.531: INFO: Deleting pod "pod-subpath-test-secret-st77" in namespace "e2e-tests-subpath-fbt84"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:57:38.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fbt84" for this suite.
Dec 30 12:57:46.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:57:47.376: INFO: namespace: e2e-tests-subpath-fbt84, resource: bindings, ignored listing per whitelist
Dec 30 12:57:47.382: INFO: namespace e2e-tests-subpath-fbt84 deletion completed in 8.825880153s

• [SLOW TEST:52.616 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:57:47.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 30 12:57:47.708: INFO: Waiting up to 5m0s for pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005" in namespace "e2e-tests-emptydir-qzz69" to be "success or failure"
Dec 30 12:57:47.800: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 92.063258ms
Dec 30 12:57:50.112: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.403988764s
Dec 30 12:57:52.151: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44302007s
Dec 30 12:57:54.164: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455601239s
Dec 30 12:57:56.306: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.597695939s
Dec 30 12:57:58.323: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61482127s
Dec 30 12:58:00.350: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.641962086s
STEP: Saw pod success
Dec 30 12:58:00.350: INFO: Pod "pod-fa8eb68e-2b03-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:58:00.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fa8eb68e-2b03-11ea-8970-0242ac110005 container test-container: 
STEP: delete the pod
Dec 30 12:58:00.840: INFO: Waiting for pod pod-fa8eb68e-2b03-11ea-8970-0242ac110005 to disappear
Dec 30 12:58:00.866: INFO: Pod pod-fa8eb68e-2b03-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:58:00.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qzz69" for this suite.
Dec 30 12:58:08.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:58:09.066: INFO: namespace: e2e-tests-emptydir-qzz69, resource: bindings, ignored listing per whitelist
Dec 30 12:58:09.113: INFO: namespace e2e-tests-emptydir-qzz69 deletion completed in 8.165774068s

• [SLOW TEST:21.731 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:58:09.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-077baca2-2b04-11ea-8970-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 30 12:58:09.448: INFO: Waiting up to 5m0s for pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005" in namespace "e2e-tests-secrets-5ffkx" to be "success or failure"
Dec 30 12:58:09.458: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.985261ms
Dec 30 12:58:11.496: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047979884s
Dec 30 12:58:13.519: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070348551s
Dec 30 12:58:15.529: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080802046s
Dec 30 12:58:17.574: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125787645s
Dec 30 12:58:19.582: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.134121969s
Dec 30 12:58:21.600: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.151709167s
Dec 30 12:58:23.613: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.164600947s
STEP: Saw pod success
Dec 30 12:58:23.613: INFO: Pod "pod-secrets-077ed900-2b04-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 12:58:23.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-077ed900-2b04-11ea-8970-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 30 12:58:24.544: INFO: Waiting for pod pod-secrets-077ed900-2b04-11ea-8970-0242ac110005 to disappear
Dec 30 12:58:24.907: INFO: Pod pod-secrets-077ed900-2b04-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 12:58:24.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5ffkx" for this suite.
Dec 30 12:58:33.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:58:34.329: INFO: namespace: e2e-tests-secrets-5ffkx, resource: bindings, ignored listing per whitelist
Dec 30 12:58:34.465: INFO: namespace e2e-tests-secrets-5ffkx deletion completed in 9.546365697s

• [SLOW TEST:25.352 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 12:58:34.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 30 12:58:37.459: INFO: Pod name wrapped-volume-race-18342a65-2b04-11ea-8970-0242ac110005: Found 0 pods out of 5
Dec 30 12:58:42.489: INFO: Pod name wrapped-volume-race-18342a65-2b04-11ea-8970-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-18342a65-2b04-11ea-8970-0242ac110005 in namespace e2e-tests-emptydir-wrapper-86rd9, will wait for the garbage collector to delete the pods
Dec 30 13:00:56.811: INFO: Deleting ReplicationController wrapped-volume-race-18342a65-2b04-11ea-8970-0242ac110005 took: 17.880635ms
Dec 30 13:00:57.312: INFO: Terminating ReplicationController wrapped-volume-race-18342a65-2b04-11ea-8970-0242ac110005 pods took: 500.574104ms
STEP: Creating RC which spawns configmap-volume pods
Dec 30 13:01:43.268: INFO: Pod name wrapped-volume-race-86f290bf-2b04-11ea-8970-0242ac110005: Found 0 pods out of 5
Dec 30 13:01:48.289: INFO: Pod name wrapped-volume-race-86f290bf-2b04-11ea-8970-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-86f290bf-2b04-11ea-8970-0242ac110005 in namespace e2e-tests-emptydir-wrapper-86rd9, will wait for the garbage collector to delete the pods
Dec 30 13:03:30.499: INFO: Deleting ReplicationController wrapped-volume-race-86f290bf-2b04-11ea-8970-0242ac110005 took: 58.651355ms
Dec 30 13:03:30.899: INFO: Terminating ReplicationController wrapped-volume-race-86f290bf-2b04-11ea-8970-0242ac110005 pods took: 400.592722ms
STEP: Creating RC which spawns configmap-volume pods
Dec 30 13:04:23.403: INFO: Pod name wrapped-volume-race-e65e062e-2b04-11ea-8970-0242ac110005: Found 0 pods out of 5
Dec 30 13:04:28.436: INFO: Pod name wrapped-volume-race-e65e062e-2b04-11ea-8970-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e65e062e-2b04-11ea-8970-0242ac110005 in namespace e2e-tests-emptydir-wrapper-86rd9, will wait for the garbage collector to delete the pods
Dec 30 13:06:16.932: INFO: Deleting ReplicationController wrapped-volume-race-e65e062e-2b04-11ea-8970-0242ac110005 took: 205.440884ms
Dec 30 13:06:17.733: INFO: Terminating ReplicationController wrapped-volume-race-e65e062e-2b04-11ea-8970-0242ac110005 pods took: 800.479762ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:07:14.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-86rd9" for this suite.
Dec 30 13:07:26.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:07:26.576: INFO: namespace: e2e-tests-emptydir-wrapper-86rd9, resource: bindings, ignored listing per whitelist
Dec 30 13:07:26.593: INFO: namespace e2e-tests-emptydir-wrapper-86rd9 deletion completed in 12.321834999s

• [SLOW TEST:532.127 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:07:26.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:07:26.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005" in namespace "e2e-tests-downward-api-n2zpz" to be "success or failure"
Dec 30 13:07:26.864: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.526748ms
Dec 30 13:07:28.971: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164844485s
Dec 30 13:07:31.930: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124093771s
Dec 30 13:07:33.957: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.151513178s
Dec 30 13:07:36.309: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.503069179s
Dec 30 13:07:38.322: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.516073727s
Dec 30 13:07:40.344: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.537847376s
STEP: Saw pod success
Dec 30 13:07:40.344: INFO: Pod "downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 13:07:40.354: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 13:07:41.562: INFO: Waiting for pod downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005 to disappear
Dec 30 13:07:41.718: INFO: Pod downwardapi-volume-53b37a85-2b05-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:07:41.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n2zpz" for this suite.
Dec 30 13:07:47.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:07:48.096: INFO: namespace: e2e-tests-downward-api-n2zpz, resource: bindings, ignored listing per whitelist
Dec 30 13:07:48.171: INFO: namespace e2e-tests-downward-api-n2zpz deletion completed in 6.431957527s

• [SLOW TEST:21.577 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:07:48.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:07:48.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qkf2m" for this suite.
Dec 30 13:08:12.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:08:12.982: INFO: namespace: e2e-tests-pods-qkf2m, resource: bindings, ignored listing per whitelist
Dec 30 13:08:13.060: INFO: namespace e2e-tests-pods-qkf2m deletion completed in 24.232962464s

• [SLOW TEST:24.889 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:08:13.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-nztk2
I1230 13:08:13.377069       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-nztk2, replica count: 1
I1230 13:08:14.427871       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:15.428118       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:16.428379       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:17.428822       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:18.429255       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:19.429575       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:20.429969       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:21.430289       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 13:08:22.430727       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 30 13:08:22.677: INFO: Created: latency-svc-z25zh
Dec 30 13:08:22.710: INFO: Got endpoints: latency-svc-z25zh [179.015471ms]
Dec 30 13:08:22.872: INFO: Created: latency-svc-6rz8x
Dec 30 13:08:23.109: INFO: Got endpoints: latency-svc-6rz8x [398.316863ms]
Dec 30 13:08:23.126: INFO: Created: latency-svc-7md77
Dec 30 13:08:23.133: INFO: Got endpoints: latency-svc-7md77 [421.843948ms]
Dec 30 13:08:23.189: INFO: Created: latency-svc-snwpc
Dec 30 13:08:23.335: INFO: Got endpoints: latency-svc-snwpc [624.241688ms]
Dec 30 13:08:23.358: INFO: Created: latency-svc-g5z6x
Dec 30 13:08:23.390: INFO: Got endpoints: latency-svc-g5z6x [679.051233ms]
Dec 30 13:08:23.550: INFO: Created: latency-svc-lnzd7
Dec 30 13:08:23.653: INFO: Got endpoints: latency-svc-lnzd7 [941.450932ms]
Dec 30 13:08:23.815: INFO: Created: latency-svc-bclv8
Dec 30 13:08:23.826: INFO: Got endpoints: latency-svc-bclv8 [1.11520358s]
Dec 30 13:08:23.980: INFO: Created: latency-svc-8hmjd
Dec 30 13:08:23.996: INFO: Got endpoints: latency-svc-8hmjd [1.28452185s]
Dec 30 13:08:24.219: INFO: Created: latency-svc-mn4f6
Dec 30 13:08:24.253: INFO: Got endpoints: latency-svc-mn4f6 [1.541203158s]
Dec 30 13:08:24.322: INFO: Created: latency-svc-r5v7w
Dec 30 13:08:24.451: INFO: Created: latency-svc-n8n9c
Dec 30 13:08:24.460: INFO: Got endpoints: latency-svc-r5v7w [1.749249119s]
Dec 30 13:08:24.460: INFO: Got endpoints: latency-svc-n8n9c [1.74903439s]
Dec 30 13:08:24.627: INFO: Created: latency-svc-x2btv
Dec 30 13:08:24.643: INFO: Got endpoints: latency-svc-x2btv [1.931232039s]
Dec 30 13:08:24.691: INFO: Created: latency-svc-d2f67
Dec 30 13:08:24.800: INFO: Got endpoints: latency-svc-d2f67 [2.088765443s]
Dec 30 13:08:24.814: INFO: Created: latency-svc-trb49
Dec 30 13:08:24.834: INFO: Got endpoints: latency-svc-trb49 [2.123819941s]
Dec 30 13:08:24.889: INFO: Created: latency-svc-fx8d6
Dec 30 13:08:25.029: INFO: Got endpoints: latency-svc-fx8d6 [228.122106ms]
Dec 30 13:08:25.112: INFO: Created: latency-svc-4vpf5
Dec 30 13:08:25.117: INFO: Got endpoints: latency-svc-4vpf5 [2.40615683s]
Dec 30 13:08:25.264: INFO: Created: latency-svc-6wwmn
Dec 30 13:08:25.280: INFO: Got endpoints: latency-svc-6wwmn [2.569334705s]
Dec 30 13:08:25.329: INFO: Created: latency-svc-894s9
Dec 30 13:08:25.464: INFO: Got endpoints: latency-svc-894s9 [2.354029899s]
Dec 30 13:08:25.556: INFO: Created: latency-svc-wfzwl
Dec 30 13:08:25.585: INFO: Created: latency-svc-km7b9
Dec 30 13:08:25.722: INFO: Got endpoints: latency-svc-wfzwl [2.589316169s]
Dec 30 13:08:25.732: INFO: Got endpoints: latency-svc-km7b9 [2.396985562s]
Dec 30 13:08:25.846: INFO: Created: latency-svc-s5h7f
Dec 30 13:08:25.984: INFO: Got endpoints: latency-svc-s5h7f [2.593988632s]
Dec 30 13:08:26.015: INFO: Created: latency-svc-wt6h5
Dec 30 13:08:26.040: INFO: Got endpoints: latency-svc-wt6h5 [2.386715955s]
Dec 30 13:08:26.174: INFO: Created: latency-svc-47bhc
Dec 30 13:08:26.195: INFO: Got endpoints: latency-svc-47bhc [2.368234283s]
Dec 30 13:08:26.959: INFO: Created: latency-svc-rkjnr
Dec 30 13:08:27.003: INFO: Got endpoints: latency-svc-rkjnr [3.006678406s]
Dec 30 13:08:27.247: INFO: Created: latency-svc-zc6cw
Dec 30 13:08:27.276: INFO: Got endpoints: latency-svc-zc6cw [3.02340895s]
Dec 30 13:08:27.458: INFO: Created: latency-svc-gr5z9
Dec 30 13:08:27.508: INFO: Got endpoints: latency-svc-gr5z9 [3.047274793s]
Dec 30 13:08:27.694: INFO: Created: latency-svc-jg4jw
Dec 30 13:08:27.875: INFO: Got endpoints: latency-svc-jg4jw [3.414293539s]
Dec 30 13:08:27.932: INFO: Created: latency-svc-zb4xd
Dec 30 13:08:28.035: INFO: Got endpoints: latency-svc-zb4xd [3.392848848s]
Dec 30 13:08:28.271: INFO: Created: latency-svc-2w5r4
Dec 30 13:08:28.284: INFO: Got endpoints: latency-svc-2w5r4 [3.449326733s]
Dec 30 13:08:28.633: INFO: Created: latency-svc-ft7s6
Dec 30 13:08:28.635: INFO: Got endpoints: latency-svc-ft7s6 [3.606649118s]
Dec 30 13:08:28.777: INFO: Created: latency-svc-h49ww
Dec 30 13:08:28.808: INFO: Got endpoints: latency-svc-h49ww [3.690945176s]
Dec 30 13:08:29.052: INFO: Created: latency-svc-lr6s5
Dec 30 13:08:29.078: INFO: Got endpoints: latency-svc-lr6s5 [3.798617747s]
Dec 30 13:08:29.261: INFO: Created: latency-svc-chrkv
Dec 30 13:08:29.350: INFO: Created: latency-svc-rqg6w
Dec 30 13:08:29.350: INFO: Got endpoints: latency-svc-chrkv [3.886083314s]
Dec 30 13:08:29.621: INFO: Got endpoints: latency-svc-rqg6w [3.888619006s]
Dec 30 13:08:29.652: INFO: Created: latency-svc-dqx9r
Dec 30 13:08:29.683: INFO: Got endpoints: latency-svc-dqx9r [3.960312428s]
Dec 30 13:08:29.938: INFO: Created: latency-svc-25588
Dec 30 13:08:29.953: INFO: Got endpoints: latency-svc-25588 [3.968430284s]
Dec 30 13:08:30.007: INFO: Created: latency-svc-pls5f
Dec 30 13:08:30.254: INFO: Got endpoints: latency-svc-pls5f [4.214680446s]
Dec 30 13:08:30.282: INFO: Created: latency-svc-cdh7h
Dec 30 13:08:30.310: INFO: Got endpoints: latency-svc-cdh7h [4.11447857s]
Dec 30 13:08:30.348: INFO: Created: latency-svc-2bjjn
Dec 30 13:08:30.578: INFO: Got endpoints: latency-svc-2bjjn [3.574911526s]
Dec 30 13:08:30.643: INFO: Created: latency-svc-l8wrb
Dec 30 13:08:30.882: INFO: Got endpoints: latency-svc-l8wrb [3.605626634s]
Dec 30 13:08:30.941: INFO: Created: latency-svc-p5ncx
Dec 30 13:08:30.946: INFO: Got endpoints: latency-svc-p5ncx [3.437946837s]
Dec 30 13:08:31.171: INFO: Created: latency-svc-cwds2
Dec 30 13:08:31.184: INFO: Got endpoints: latency-svc-cwds2 [3.308842809s]
Dec 30 13:08:31.468: INFO: Created: latency-svc-njdnh
Dec 30 13:08:31.668: INFO: Got endpoints: latency-svc-njdnh [3.632353302s]
Dec 30 13:08:31.681: INFO: Created: latency-svc-gsjfd
Dec 30 13:08:31.778: INFO: Got endpoints: latency-svc-gsjfd [3.494252589s]
Dec 30 13:08:31.813: INFO: Created: latency-svc-k5pjm
Dec 30 13:08:31.967: INFO: Got endpoints: latency-svc-k5pjm [3.331197404s]
Dec 30 13:08:32.035: INFO: Created: latency-svc-4q59g
Dec 30 13:08:32.064: INFO: Got endpoints: latency-svc-4q59g [3.256206919s]
Dec 30 13:08:32.220: INFO: Created: latency-svc-tcd6n
Dec 30 13:08:32.259: INFO: Got endpoints: latency-svc-tcd6n [3.180407205s]
Dec 30 13:08:32.570: INFO: Created: latency-svc-g74k4
Dec 30 13:08:32.710: INFO: Got endpoints: latency-svc-g74k4 [3.359621333s]
Dec 30 13:08:32.748: INFO: Created: latency-svc-8vwht
Dec 30 13:08:33.068: INFO: Got endpoints: latency-svc-8vwht [3.44686454s]
Dec 30 13:08:33.211: INFO: Created: latency-svc-2798b
Dec 30 13:08:33.365: INFO: Created: latency-svc-dtvqc
Dec 30 13:08:33.630: INFO: Got endpoints: latency-svc-dtvqc [3.677223385s]
Dec 30 13:08:33.630: INFO: Got endpoints: latency-svc-2798b [3.947416233s]
Dec 30 13:08:33.989: INFO: Created: latency-svc-t46sq
Dec 30 13:08:34.319: INFO: Got endpoints: latency-svc-t46sq [4.063662801s]
Dec 30 13:08:34.534: INFO: Created: latency-svc-wxs7m
Dec 30 13:08:34.601: INFO: Got endpoints: latency-svc-wxs7m [4.291188667s]
Dec 30 13:08:34.791: INFO: Created: latency-svc-dlv4c
Dec 30 13:08:34.885: INFO: Got endpoints: latency-svc-dlv4c [4.307194649s]
Dec 30 13:08:34.913: INFO: Created: latency-svc-c7tpb
Dec 30 13:08:34.935: INFO: Got endpoints: latency-svc-c7tpb [4.052867537s]
Dec 30 13:08:35.111: INFO: Created: latency-svc-8c2wm
Dec 30 13:08:35.121: INFO: Got endpoints: latency-svc-8c2wm [4.174508313s]
Dec 30 13:08:35.343: INFO: Created: latency-svc-5g2n2
Dec 30 13:08:35.368: INFO: Got endpoints: latency-svc-5g2n2 [4.183783599s]
Dec 30 13:08:35.439: INFO: Created: latency-svc-xzf6m
Dec 30 13:08:35.611: INFO: Got endpoints: latency-svc-xzf6m [3.942477967s]
Dec 30 13:08:35.626: INFO: Created: latency-svc-zb2dn
Dec 30 13:08:35.646: INFO: Got endpoints: latency-svc-zb2dn [3.868208509s]
Dec 30 13:08:35.867: INFO: Created: latency-svc-g2wrt
Dec 30 13:08:35.890: INFO: Got endpoints: latency-svc-g2wrt [3.922852753s]
Dec 30 13:08:36.163: INFO: Created: latency-svc-zfwds
Dec 30 13:08:36.192: INFO: Got endpoints: latency-svc-zfwds [4.127824396s]
Dec 30 13:08:36.486: INFO: Created: latency-svc-bkxph
Dec 30 13:08:36.555: INFO: Got endpoints: latency-svc-bkxph [4.296050091s]
Dec 30 13:08:36.886: INFO: Created: latency-svc-bmkth
Dec 30 13:08:37.160: INFO: Got endpoints: latency-svc-bmkth [4.450413452s]
Dec 30 13:08:37.179: INFO: Created: latency-svc-kmlj4
Dec 30 13:08:37.536: INFO: Got endpoints: latency-svc-kmlj4 [4.467971877s]
Dec 30 13:08:37.540: INFO: Created: latency-svc-frqhs
Dec 30 13:08:37.600: INFO: Got endpoints: latency-svc-frqhs [3.96973128s]
Dec 30 13:08:37.799: INFO: Created: latency-svc-cq4nk
Dec 30 13:08:37.884: INFO: Got endpoints: latency-svc-cq4nk [4.254080358s]
Dec 30 13:08:37.905: INFO: Created: latency-svc-plg62
Dec 30 13:08:38.027: INFO: Got endpoints: latency-svc-plg62 [3.707962545s]
Dec 30 13:08:38.036: INFO: Created: latency-svc-8h4p9
Dec 30 13:08:38.059: INFO: Got endpoints: latency-svc-8h4p9 [3.458155393s]
Dec 30 13:08:38.268: INFO: Created: latency-svc-4ltrt
Dec 30 13:08:38.333: INFO: Got endpoints: latency-svc-4ltrt [3.447220358s]
Dec 30 13:08:38.539: INFO: Created: latency-svc-kx8cw
Dec 30 13:08:38.582: INFO: Got endpoints: latency-svc-kx8cw [3.646704852s]
Dec 30 13:08:38.625: INFO: Created: latency-svc-n6hzz
Dec 30 13:08:38.806: INFO: Got endpoints: latency-svc-n6hzz [3.684794865s]
Dec 30 13:08:38.871: INFO: Created: latency-svc-dmjlm
Dec 30 13:08:38.889: INFO: Got endpoints: latency-svc-dmjlm [3.521481684s]
Dec 30 13:08:39.168: INFO: Created: latency-svc-lwbkb
Dec 30 13:08:39.189: INFO: Got endpoints: latency-svc-lwbkb [3.578232828s]
Dec 30 13:08:39.343: INFO: Created: latency-svc-vqhng
Dec 30 13:08:39.384: INFO: Got endpoints: latency-svc-vqhng [3.736937192s]
Dec 30 13:08:39.446: INFO: Created: latency-svc-qtl69
Dec 30 13:08:39.564: INFO: Got endpoints: latency-svc-qtl69 [3.673580013s]
Dec 30 13:08:39.587: INFO: Created: latency-svc-qm92w
Dec 30 13:08:39.638: INFO: Got endpoints: latency-svc-qm92w [3.44540558s]
Dec 30 13:08:39.869: INFO: Created: latency-svc-8srd6
Dec 30 13:08:40.111: INFO: Got endpoints: latency-svc-8srd6 [3.555577849s]
Dec 30 13:08:40.216: INFO: Created: latency-svc-7x9bs
Dec 30 13:08:40.410: INFO: Got endpoints: latency-svc-7x9bs [3.249612085s]
Dec 30 13:08:40.458: INFO: Created: latency-svc-zqcwf
Dec 30 13:08:40.675: INFO: Got endpoints: latency-svc-zqcwf [3.138084228s]
Dec 30 13:08:40.764: INFO: Created: latency-svc-znhrb
Dec 30 13:08:40.899: INFO: Got endpoints: latency-svc-znhrb [3.2985028s]
Dec 30 13:08:40.930: INFO: Created: latency-svc-q6bvx
Dec 30 13:08:40.979: INFO: Got endpoints: latency-svc-q6bvx [3.094617579s]
Dec 30 13:08:41.237: INFO: Created: latency-svc-rtjb4
Dec 30 13:08:41.250: INFO: Got endpoints: latency-svc-rtjb4 [3.222901441s]
Dec 30 13:08:41.457: INFO: Created: latency-svc-8kjtf
Dec 30 13:08:41.472: INFO: Got endpoints: latency-svc-8kjtf [3.412461067s]
Dec 30 13:08:41.650: INFO: Created: latency-svc-zghg7
Dec 30 13:08:41.734: INFO: Created: latency-svc-smmc4
Dec 30 13:08:41.920: INFO: Got endpoints: latency-svc-zghg7 [3.587322221s]
Dec 30 13:08:41.930: INFO: Got endpoints: latency-svc-smmc4 [3.347946106s]
Dec 30 13:08:41.966: INFO: Created: latency-svc-8shll
Dec 30 13:08:41.976: INFO: Got endpoints: latency-svc-8shll [3.170009208s]
Dec 30 13:08:42.210: INFO: Created: latency-svc-sqvtj
Dec 30 13:08:42.449: INFO: Got endpoints: latency-svc-sqvtj [3.559484407s]
Dec 30 13:08:42.466: INFO: Created: latency-svc-mgc78
Dec 30 13:08:42.555: INFO: Got endpoints: latency-svc-mgc78 [3.365454795s]
Dec 30 13:08:42.704: INFO: Created: latency-svc-pbkcj
Dec 30 13:08:42.750: INFO: Got endpoints: latency-svc-pbkcj [3.366061924s]
Dec 30 13:08:42.926: INFO: Created: latency-svc-6tqwg
Dec 30 13:08:42.952: INFO: Got endpoints: latency-svc-6tqwg [3.38796536s]
Dec 30 13:08:43.103: INFO: Created: latency-svc-2chg8
Dec 30 13:08:43.155: INFO: Got endpoints: latency-svc-2chg8 [3.516163189s]
Dec 30 13:08:43.376: INFO: Created: latency-svc-sfgdh
Dec 30 13:08:43.448: INFO: Created: latency-svc-nsl9m
Dec 30 13:08:43.558: INFO: Got endpoints: latency-svc-sfgdh [3.447177845s]
Dec 30 13:08:43.619: INFO: Created: latency-svc-nrk69
Dec 30 13:08:43.622: INFO: Got endpoints: latency-svc-nsl9m [3.21149066s]
Dec 30 13:08:43.767: INFO: Got endpoints: latency-svc-nrk69 [3.092041975s]
Dec 30 13:08:43.798: INFO: Created: latency-svc-429lv
Dec 30 13:08:43.810: INFO: Got endpoints: latency-svc-429lv [2.910948828s]
Dec 30 13:08:43.863: INFO: Created: latency-svc-79b5q
Dec 30 13:08:44.054: INFO: Got endpoints: latency-svc-79b5q [3.074262666s]
Dec 30 13:08:44.109: INFO: Created: latency-svc-sm5bs
Dec 30 13:08:44.261: INFO: Got endpoints: latency-svc-sm5bs [3.011480395s]
Dec 30 13:08:44.375: INFO: Created: latency-svc-dfb45
Dec 30 13:08:44.564: INFO: Created: latency-svc-pn4vh
Dec 30 13:08:44.633: INFO: Got endpoints: latency-svc-dfb45 [3.160623127s]
Dec 30 13:08:44.828: INFO: Got endpoints: latency-svc-pn4vh [2.907289746s]
Dec 30 13:08:44.866: INFO: Created: latency-svc-6pp75
Dec 30 13:08:44.874: INFO: Got endpoints: latency-svc-6pp75 [2.944189157s]
Dec 30 13:08:45.068: INFO: Created: latency-svc-45frb
Dec 30 13:08:45.082: INFO: Got endpoints: latency-svc-45frb [3.106590257s]
Dec 30 13:08:45.166: INFO: Created: latency-svc-gg24q
Dec 30 13:08:45.298: INFO: Got endpoints: latency-svc-gg24q [2.848939487s]
Dec 30 13:08:45.325: INFO: Created: latency-svc-rfl57
Dec 30 13:08:45.347: INFO: Got endpoints: latency-svc-rfl57 [2.79146142s]
Dec 30 13:08:45.352: INFO: Created: latency-svc-4bhhq
Dec 30 13:08:45.358: INFO: Got endpoints: latency-svc-4bhhq [2.60821625s]
Dec 30 13:08:45.504: INFO: Created: latency-svc-vr68v
Dec 30 13:08:45.523: INFO: Got endpoints: latency-svc-vr68v [2.570955699s]
Dec 30 13:08:45.731: INFO: Created: latency-svc-7sgfn
Dec 30 13:08:45.751: INFO: Got endpoints: latency-svc-7sgfn [2.596013617s]
Dec 30 13:08:45.806: INFO: Created: latency-svc-rxnlp
Dec 30 13:08:45.921: INFO: Got endpoints: latency-svc-rxnlp [2.362514532s]
Dec 30 13:08:45.937: INFO: Created: latency-svc-9k6lb
Dec 30 13:08:45.956: INFO: Got endpoints: latency-svc-9k6lb [2.333714783s]
Dec 30 13:08:46.013: INFO: Created: latency-svc-45hft
Dec 30 13:08:46.197: INFO: Got endpoints: latency-svc-45hft [2.429687082s]
Dec 30 13:08:46.242: INFO: Created: latency-svc-g2l2s
Dec 30 13:08:46.264: INFO: Got endpoints: latency-svc-g2l2s [2.453952386s]
Dec 30 13:08:46.426: INFO: Created: latency-svc-mvrcm
Dec 30 13:08:46.436: INFO: Got endpoints: latency-svc-mvrcm [2.38194966s]
Dec 30 13:08:46.619: INFO: Created: latency-svc-69mz2
Dec 30 13:08:46.658: INFO: Got endpoints: latency-svc-69mz2 [2.395990055s]
Dec 30 13:08:46.912: INFO: Created: latency-svc-vxx96
Dec 30 13:08:46.943: INFO: Got endpoints: latency-svc-vxx96 [2.310249738s]
Dec 30 13:08:46.992: INFO: Created: latency-svc-87p2t
Dec 30 13:08:47.000: INFO: Got endpoints: latency-svc-87p2t [2.172495163s]
Dec 30 13:08:47.173: INFO: Created: latency-svc-5g9zg
Dec 30 13:08:47.173: INFO: Got endpoints: latency-svc-5g9zg [2.299016099s]
Dec 30 13:08:47.322: INFO: Created: latency-svc-xclwc
Dec 30 13:08:47.351: INFO: Got endpoints: latency-svc-xclwc [2.268329322s]
Dec 30 13:08:47.400: INFO: Created: latency-svc-s9fz9
Dec 30 13:08:47.405: INFO: Got endpoints: latency-svc-s9fz9 [2.106927919s]
Dec 30 13:08:47.598: INFO: Created: latency-svc-vflv4
Dec 30 13:08:47.638: INFO: Got endpoints: latency-svc-vflv4 [2.290877106s]
Dec 30 13:08:47.856: INFO: Created: latency-svc-fn572
Dec 30 13:08:47.897: INFO: Got endpoints: latency-svc-fn572 [2.538419344s]
Dec 30 13:08:47.903: INFO: Created: latency-svc-lqb9j
Dec 30 13:08:47.918: INFO: Got endpoints: latency-svc-lqb9j [2.394880222s]
Dec 30 13:08:48.077: INFO: Created: latency-svc-6vj7k
Dec 30 13:08:48.270: INFO: Got endpoints: latency-svc-6vj7k [2.518626759s]
Dec 30 13:08:48.302: INFO: Created: latency-svc-g7tvt
Dec 30 13:08:48.330: INFO: Got endpoints: latency-svc-g7tvt [2.408644754s]
Dec 30 13:08:48.539: INFO: Created: latency-svc-89zl4
Dec 30 13:08:48.571: INFO: Got endpoints: latency-svc-89zl4 [2.614789813s]
Dec 30 13:08:48.791: INFO: Created: latency-svc-9zmkb
Dec 30 13:08:48.878: INFO: Got endpoints: latency-svc-9zmkb [2.680070127s]
Dec 30 13:08:48.888: INFO: Created: latency-svc-jwmbh
Dec 30 13:08:49.184: INFO: Got endpoints: latency-svc-jwmbh [2.919685907s]
Dec 30 13:08:49.214: INFO: Created: latency-svc-9b646
Dec 30 13:08:49.339: INFO: Got endpoints: latency-svc-9b646 [2.902468247s]
Dec 30 13:08:49.374: INFO: Created: latency-svc-mzhkg
Dec 30 13:08:49.388: INFO: Got endpoints: latency-svc-mzhkg [2.730007151s]
Dec 30 13:08:49.440: INFO: Created: latency-svc-zk5rm
Dec 30 13:08:49.566: INFO: Got endpoints: latency-svc-zk5rm [2.622268169s]
Dec 30 13:08:49.586: INFO: Created: latency-svc-ghljl
Dec 30 13:08:49.602: INFO: Got endpoints: latency-svc-ghljl [2.601644533s]
Dec 30 13:08:49.744: INFO: Created: latency-svc-qhhxv
Dec 30 13:08:49.761: INFO: Got endpoints: latency-svc-qhhxv [2.587599395s]
Dec 30 13:08:49.834: INFO: Created: latency-svc-nf8tb
Dec 30 13:08:49.976: INFO: Got endpoints: latency-svc-nf8tb [2.624820927s]
Dec 30 13:08:50.014: INFO: Created: latency-svc-d4fb7
Dec 30 13:08:50.031: INFO: Got endpoints: latency-svc-d4fb7 [2.625429512s]
Dec 30 13:08:51.160: INFO: Created: latency-svc-qv8jv
Dec 30 13:08:51.182: INFO: Got endpoints: latency-svc-qv8jv [3.543989987s]
Dec 30 13:08:51.543: INFO: Created: latency-svc-87wdw
Dec 30 13:08:51.589: INFO: Got endpoints: latency-svc-87wdw [3.692214692s]
Dec 30 13:08:51.889: INFO: Created: latency-svc-g9crv
Dec 30 13:08:52.020: INFO: Got endpoints: latency-svc-g9crv [4.101620956s]
Dec 30 13:08:52.285: INFO: Created: latency-svc-jlcd5
Dec 30 13:08:52.291: INFO: Got endpoints: latency-svc-jlcd5 [4.020707189s]
Dec 30 13:08:52.386: INFO: Created: latency-svc-zp7pc
Dec 30 13:08:52.630: INFO: Got endpoints: latency-svc-zp7pc [4.300024973s]
Dec 30 13:08:52.646: INFO: Created: latency-svc-228kf
Dec 30 13:08:52.673: INFO: Got endpoints: latency-svc-228kf [4.100892532s]
Dec 30 13:08:52.937: INFO: Created: latency-svc-wxpz7
Dec 30 13:08:52.953: INFO: Got endpoints: latency-svc-wxpz7 [4.074610921s]
Dec 30 13:08:53.175: INFO: Created: latency-svc-w2dx2
Dec 30 13:08:53.185: INFO: Got endpoints: latency-svc-w2dx2 [4.000550839s]
Dec 30 13:08:53.242: INFO: Created: latency-svc-n2xkx
Dec 30 13:08:53.260: INFO: Got endpoints: latency-svc-n2xkx [3.920645239s]
Dec 30 13:08:53.506: INFO: Created: latency-svc-lqprh
Dec 30 13:08:53.524: INFO: Got endpoints: latency-svc-lqprh [4.136167093s]
Dec 30 13:08:53.835: INFO: Created: latency-svc-xkmrz
Dec 30 13:08:54.094: INFO: Got endpoints: latency-svc-xkmrz [4.528128655s]
Dec 30 13:08:54.361: INFO: Created: latency-svc-2gbjf
Dec 30 13:08:54.362: INFO: Got endpoints: latency-svc-2gbjf [4.760227016s]
Dec 30 13:08:54.572: INFO: Created: latency-svc-mx6br
Dec 30 13:08:54.761: INFO: Got endpoints: latency-svc-mx6br [5.0000048s]
Dec 30 13:08:54.765: INFO: Created: latency-svc-6f8l9
Dec 30 13:08:54.795: INFO: Got endpoints: latency-svc-6f8l9 [4.819550648s]
Dec 30 13:08:54.960: INFO: Created: latency-svc-z7vff
Dec 30 13:08:54.998: INFO: Got endpoints: latency-svc-z7vff [4.966642561s]
Dec 30 13:08:55.136: INFO: Created: latency-svc-76fnx
Dec 30 13:08:55.395: INFO: Got endpoints: latency-svc-76fnx [4.212636624s]
Dec 30 13:08:55.438: INFO: Created: latency-svc-jhpzg
Dec 30 13:08:55.438: INFO: Got endpoints: latency-svc-jhpzg [3.84857558s]
Dec 30 13:08:55.568: INFO: Created: latency-svc-5kwnv
Dec 30 13:08:55.584: INFO: Got endpoints: latency-svc-5kwnv [3.563641933s]
Dec 30 13:08:55.761: INFO: Created: latency-svc-t8gmg
Dec 30 13:08:55.799: INFO: Got endpoints: latency-svc-t8gmg [3.508412249s]
Dec 30 13:08:55.974: INFO: Created: latency-svc-kcgh7
Dec 30 13:08:56.027: INFO: Got endpoints: latency-svc-kcgh7 [3.396217704s]
Dec 30 13:08:56.060: INFO: Created: latency-svc-bb8f6
Dec 30 13:08:56.223: INFO: Got endpoints: latency-svc-bb8f6 [3.549987693s]
Dec 30 13:08:56.288: INFO: Created: latency-svc-hglx6
Dec 30 13:08:56.288: INFO: Got endpoints: latency-svc-hglx6 [3.335193436s]
Dec 30 13:08:56.537: INFO: Created: latency-svc-cndb5
Dec 30 13:08:56.559: INFO: Got endpoints: latency-svc-cndb5 [3.373650027s]
Dec 30 13:08:56.774: INFO: Created: latency-svc-87rtm
Dec 30 13:08:56.894: INFO: Got endpoints: latency-svc-87rtm [3.633934563s]
Dec 30 13:08:57.266: INFO: Created: latency-svc-57ckg
Dec 30 13:08:57.576: INFO: Got endpoints: latency-svc-57ckg [4.051534369s]
Dec 30 13:08:57.632: INFO: Created: latency-svc-2hcrx
Dec 30 13:08:57.850: INFO: Got endpoints: latency-svc-2hcrx [3.755924066s]
Dec 30 13:08:57.896: INFO: Created: latency-svc-8wxs7
Dec 30 13:08:57.928: INFO: Got endpoints: latency-svc-8wxs7 [3.565620297s]
Dec 30 13:08:58.106: INFO: Created: latency-svc-h7vkr
Dec 30 13:08:58.146: INFO: Got endpoints: latency-svc-h7vkr [3.384210379s]
Dec 30 13:08:58.445: INFO: Created: latency-svc-98xz2
Dec 30 13:08:58.478: INFO: Got endpoints: latency-svc-98xz2 [3.682282218s]
Dec 30 13:08:58.586: INFO: Created: latency-svc-km79s
Dec 30 13:08:58.628: INFO: Got endpoints: latency-svc-km79s [3.630615567s]
Dec 30 13:08:58.771: INFO: Created: latency-svc-9t9qm
Dec 30 13:08:58.784: INFO: Got endpoints: latency-svc-9t9qm [3.38847275s]
Dec 30 13:08:58.947: INFO: Created: latency-svc-2cvnh
Dec 30 13:08:58.954: INFO: Got endpoints: latency-svc-2cvnh [3.516382453s]
Dec 30 13:08:59.033: INFO: Created: latency-svc-sqgsw
Dec 30 13:08:59.161: INFO: Got endpoints: latency-svc-sqgsw [3.577361448s]
Dec 30 13:08:59.177: INFO: Created: latency-svc-n8j6n
Dec 30 13:08:59.180: INFO: Got endpoints: latency-svc-n8j6n [3.380924534s]
Dec 30 13:08:59.236: INFO: Created: latency-svc-7sgpr
Dec 30 13:08:59.240: INFO: Got endpoints: latency-svc-7sgpr [3.213724422s]
Dec 30 13:08:59.375: INFO: Created: latency-svc-ffwpr
Dec 30 13:08:59.397: INFO: Got endpoints: latency-svc-ffwpr [3.174235203s]
Dec 30 13:08:59.465: INFO: Created: latency-svc-f92xf
Dec 30 13:08:59.465: INFO: Got endpoints: latency-svc-f92xf [3.177416883s]
Dec 30 13:08:59.599: INFO: Created: latency-svc-zhbfl
Dec 30 13:08:59.625: INFO: Got endpoints: latency-svc-zhbfl [3.066189404s]
Dec 30 13:08:59.805: INFO: Created: latency-svc-wrbrm
Dec 30 13:08:59.836: INFO: Got endpoints: latency-svc-wrbrm [2.94224093s]
Dec 30 13:08:59.884: INFO: Created: latency-svc-lcm5d
Dec 30 13:09:00.013: INFO: Got endpoints: latency-svc-lcm5d [2.436915877s]
Dec 30 13:09:00.021: INFO: Created: latency-svc-gls66
Dec 30 13:09:00.054: INFO: Got endpoints: latency-svc-gls66 [2.203809634s]
Dec 30 13:09:00.363: INFO: Created: latency-svc-5mnv6
Dec 30 13:09:00.631: INFO: Got endpoints: latency-svc-5mnv6 [2.702333086s]
Dec 30 13:09:00.688: INFO: Created: latency-svc-dlz2l
Dec 30 13:09:00.847: INFO: Got endpoints: latency-svc-dlz2l [2.701098447s]
Dec 30 13:09:00.876: INFO: Created: latency-svc-4rbvd
Dec 30 13:09:00.881: INFO: Got endpoints: latency-svc-4rbvd [2.403037918s]
Dec 30 13:09:01.059: INFO: Created: latency-svc-nfh4k
Dec 30 13:09:01.131: INFO: Created: latency-svc-bx2sx
Dec 30 13:09:01.139: INFO: Got endpoints: latency-svc-nfh4k [2.509934479s]
Dec 30 13:09:01.290: INFO: Got endpoints: latency-svc-bx2sx [2.505653123s]
Dec 30 13:09:01.321: INFO: Created: latency-svc-v9sfk
Dec 30 13:09:01.329: INFO: Got endpoints: latency-svc-v9sfk [2.374648754s]
Dec 30 13:09:01.511: INFO: Created: latency-svc-644kt
Dec 30 13:09:01.527: INFO: Got endpoints: latency-svc-644kt [2.365171147s]
Dec 30 13:09:01.603: INFO: Created: latency-svc-kxzn7
Dec 30 13:09:01.860: INFO: Got endpoints: latency-svc-kxzn7 [2.679565047s]
Dec 30 13:09:01.897: INFO: Created: latency-svc-xm42d
Dec 30 13:09:01.897: INFO: Got endpoints: latency-svc-xm42d [2.656626373s]
Dec 30 13:09:02.064: INFO: Created: latency-svc-ht2vn
Dec 30 13:09:02.075: INFO: Got endpoints: latency-svc-ht2vn [2.677295549s]
Dec 30 13:09:02.143: INFO: Created: latency-svc-brrch
Dec 30 13:09:02.155: INFO: Got endpoints: latency-svc-brrch [2.689544487s]
Dec 30 13:09:02.379: INFO: Created: latency-svc-l2jb5
Dec 30 13:09:02.405: INFO: Got endpoints: latency-svc-l2jb5 [2.779950922s]
Dec 30 13:09:02.653: INFO: Created: latency-svc-zkrhr
Dec 30 13:09:02.682: INFO: Got endpoints: latency-svc-zkrhr [2.845104996s]
Dec 30 13:09:02.759: INFO: Created: latency-svc-klmh5
Dec 30 13:09:02.952: INFO: Got endpoints: latency-svc-klmh5 [2.938234027s]
Dec 30 13:09:03.005: INFO: Created: latency-svc-8hbkq
Dec 30 13:09:03.027: INFO: Got endpoints: latency-svc-8hbkq [2.972797786s]
Dec 30 13:09:03.231: INFO: Created: latency-svc-xs8wd
Dec 30 13:09:03.232: INFO: Got endpoints: latency-svc-xs8wd [2.601298419s]
Dec 30 13:09:03.460: INFO: Created: latency-svc-sd9rc
Dec 30 13:09:03.499: INFO: Got endpoints: latency-svc-sd9rc [2.652488582s]
Dec 30 13:09:03.654: INFO: Created: latency-svc-2l7p4
Dec 30 13:09:03.732: INFO: Got endpoints: latency-svc-2l7p4 [2.850730113s]
Dec 30 13:09:03.743: INFO: Created: latency-svc-2djvc
Dec 30 13:09:03.858: INFO: Got endpoints: latency-svc-2djvc [2.719348263s]
Dec 30 13:09:04.598: INFO: Created: latency-svc-tm66c
Dec 30 13:09:04.792: INFO: Got endpoints: latency-svc-tm66c [3.502172003s]
Dec 30 13:09:04.797: INFO: Created: latency-svc-cj77k
Dec 30 13:09:04.820: INFO: Got endpoints: latency-svc-cj77k [3.490706757s]
Dec 30 13:09:05.044: INFO: Created: latency-svc-pw2pw
Dec 30 13:09:05.080: INFO: Got endpoints: latency-svc-pw2pw [3.553470623s]
Dec 30 13:09:05.277: INFO: Created: latency-svc-sn9gc
Dec 30 13:09:05.303: INFO: Got endpoints: latency-svc-sn9gc [3.44273794s]
Dec 30 13:09:05.344: INFO: Created: latency-svc-26zfs
Dec 30 13:09:05.494: INFO: Got endpoints: latency-svc-26zfs [3.596733398s]
Dec 30 13:09:05.504: INFO: Created: latency-svc-mmfjq
Dec 30 13:09:05.530: INFO: Got endpoints: latency-svc-mmfjq [3.455275962s]
Dec 30 13:09:05.756: INFO: Created: latency-svc-sv5jk
Dec 30 13:09:05.769: INFO: Got endpoints: latency-svc-sv5jk [3.613783873s]
Dec 30 13:09:05.833: INFO: Created: latency-svc-wh2gv
Dec 30 13:09:05.969: INFO: Got endpoints: latency-svc-wh2gv [3.563623464s]
Dec 30 13:09:06.013: INFO: Created: latency-svc-c5grc
Dec 30 13:09:06.018: INFO: Got endpoints: latency-svc-c5grc [3.336771335s]
Dec 30 13:09:06.018: INFO: Latencies: [228.122106ms 398.316863ms 421.843948ms 624.241688ms 679.051233ms 941.450932ms 1.11520358s 1.28452185s 1.541203158s 1.74903439s 1.749249119s 1.931232039s 2.088765443s 2.106927919s 2.123819941s 2.172495163s 2.203809634s 2.268329322s 2.290877106s 2.299016099s 2.310249738s 2.333714783s 2.354029899s 2.362514532s 2.365171147s 2.368234283s 2.374648754s 2.38194966s 2.386715955s 2.394880222s 2.395990055s 2.396985562s 2.403037918s 2.40615683s 2.408644754s 2.429687082s 2.436915877s 2.453952386s 2.505653123s 2.509934479s 2.518626759s 2.538419344s 2.569334705s 2.570955699s 2.587599395s 2.589316169s 2.593988632s 2.596013617s 2.601298419s 2.601644533s 2.60821625s 2.614789813s 2.622268169s 2.624820927s 2.625429512s 2.652488582s 2.656626373s 2.677295549s 2.679565047s 2.680070127s 2.689544487s 2.701098447s 2.702333086s 2.719348263s 2.730007151s 2.779950922s 2.79146142s 2.845104996s 2.848939487s 2.850730113s 2.902468247s 2.907289746s 2.910948828s 2.919685907s 2.938234027s 2.94224093s 2.944189157s 2.972797786s 3.006678406s 3.011480395s 3.02340895s 3.047274793s 3.066189404s 3.074262666s 3.092041975s 3.094617579s 3.106590257s 3.138084228s 3.160623127s 3.170009208s 3.174235203s 3.177416883s 3.180407205s 3.21149066s 3.213724422s 3.222901441s 3.249612085s 3.256206919s 3.2985028s 3.308842809s 3.331197404s 3.335193436s 3.336771335s 3.347946106s 3.359621333s 3.365454795s 3.366061924s 3.373650027s 3.380924534s 3.384210379s 3.38796536s 3.38847275s 3.392848848s 3.396217704s 3.412461067s 3.414293539s 3.437946837s 3.44273794s 3.44540558s 3.44686454s 3.447177845s 3.447220358s 3.449326733s 3.455275962s 3.458155393s 3.490706757s 3.494252589s 3.502172003s 3.508412249s 3.516163189s 3.516382453s 3.521481684s 3.543989987s 3.549987693s 3.553470623s 3.555577849s 3.559484407s 3.563623464s 3.563641933s 3.565620297s 3.574911526s 3.577361448s 3.578232828s 3.587322221s 3.596733398s 3.605626634s 3.606649118s 3.613783873s 3.630615567s 3.632353302s 3.633934563s 3.646704852s 3.673580013s 3.677223385s 3.682282218s 3.684794865s 3.690945176s 3.692214692s 3.707962545s 3.736937192s 3.755924066s 3.798617747s 3.84857558s 3.868208509s 3.886083314s 3.888619006s 3.920645239s 3.922852753s 3.942477967s 3.947416233s 3.960312428s 3.968430284s 3.96973128s 4.000550839s 4.020707189s 4.051534369s 4.052867537s 4.063662801s 4.074610921s 4.100892532s 4.101620956s 4.11447857s 4.127824396s 4.136167093s 4.174508313s 4.183783599s 4.212636624s 4.214680446s 4.254080358s 4.291188667s 4.296050091s 4.300024973s 4.307194649s 4.450413452s 4.467971877s 4.528128655s 4.760227016s 4.819550648s 4.966642561s 5.0000048s]
Dec 30 13:09:06.019: INFO: 50 %ile: 3.331197404s
Dec 30 13:09:06.019: INFO: 90 %ile: 4.101620956s
Dec 30 13:09:06.019: INFO: 99 %ile: 4.966642561s
Dec 30 13:09:06.019: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:09:06.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-nztk2" for this suite.
Dec 30 13:10:22.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:10:22.302: INFO: namespace: e2e-tests-svc-latency-nztk2, resource: bindings, ignored listing per whitelist
Dec 30 13:10:22.437: INFO: namespace e2e-tests-svc-latency-nztk2 deletion completed in 1m16.379990034s

• [SLOW TEST:129.377 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
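The `50 %ile` / `90 %ile` / `99 %ile` lines above summarize the 200 sorted latency samples. A minimal nearest-rank percentile computation reproduces that kind of summary (a sketch only; the e2e framework's exact indexing may differ):

```python
import math

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of the data is less than or equal to it."""
    n = len(sorted_samples)
    rank = max(1, math.ceil(p / 100.0 * n))  # 1-based rank into sorted data
    return sorted_samples[rank - 1]

# 200 samples, like the run above (values here are illustrative, not the
# measured latencies)
samples = list(range(1, 201))
for p in (50, 90, 99):
    print(f"{p} %ile: {percentile(samples, p)}")
```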
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:10:22.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 13:10:22.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-nv4cs'
Dec 30 13:10:25.390: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 13:10:25.390: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 30 13:10:27.511: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4cd5n]
Dec 30 13:10:27.511: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4cd5n" in namespace "e2e-tests-kubectl-nv4cs" to be "running and ready"
Dec 30 13:10:28.088: INFO: Pod "e2e-test-nginx-rc-4cd5n": Phase="Pending", Reason="", readiness=false. Elapsed: 577.19449ms
Dec 30 13:10:30.103: INFO: Pod "e2e-test-nginx-rc-4cd5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.592255317s
Dec 30 13:10:32.470: INFO: Pod "e2e-test-nginx-rc-4cd5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.959316105s
Dec 30 13:10:34.495: INFO: Pod "e2e-test-nginx-rc-4cd5n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.984274275s
Dec 30 13:10:36.539: INFO: Pod "e2e-test-nginx-rc-4cd5n": Phase="Running", Reason="", readiness=true. Elapsed: 9.027517505s
Dec 30 13:10:36.539: INFO: Pod "e2e-test-nginx-rc-4cd5n" satisfied condition "running and ready"
Dec 30 13:10:36.539: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4cd5n]
Dec 30 13:10:36.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nv4cs'
Dec 30 13:10:36.837: INFO: stderr: ""
Dec 30 13:10:36.837: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 30 13:10:36.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nv4cs'
Dec 30 13:10:36.962: INFO: stderr: ""
Dec 30 13:10:36.962: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:10:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nv4cs" for this suite.
Dec 30 13:10:59.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:10:59.199: INFO: namespace: e2e-tests-kubectl-nv4cs, resource: bindings, ignored listing per whitelist
Dec 30 13:10:59.208: INFO: namespace e2e-tests-kubectl-nv4cs deletion completed in 22.237234048s

• [SLOW TEST:36.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
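The `Waiting up to 5m0s for pod ... to be "running and ready"` lines above come from a poll-until-timeout loop that re-checks the pod phase every couple of seconds. A generic sketch of that pattern (assumed structure, not the framework's actual code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout` seconds elapse,
    mirroring the pod-readiness wait loop in the log above (sketch only)."""
    start = clock()
    while clock() - start < timeout:
        if condition():
            return True
        sleep(interval)
    return False

# Example: a fake pod that reaches Running on the third poll.
phases = iter(["Pending", "Pending", "Running"])
assert wait_for(lambda: next(phases) == "Running", timeout=10, interval=0)
```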
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:10:59.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-fdbjk
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 30 13:10:59.541: INFO: Found 0 stateful pods, waiting for 3
Dec 30 13:11:09.581: INFO: Found 2 stateful pods, waiting for 3
Dec 30 13:11:19.623: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:11:19.623: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:11:19.623: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 13:11:29.599: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:11:29.599: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:11:29.599: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 30 13:11:39.566: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:11:39.566: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:11:39.566: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 30 13:11:39.632: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 30 13:11:49.715: INFO: Updating stateful set ss2
Dec 30 13:11:49.733: INFO: Waiting for Pod e2e-tests-statefulset-fdbjk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 13:11:59.767: INFO: Waiting for Pod e2e-tests-statefulset-fdbjk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 30 13:12:12.416: INFO: Found 2 stateful pods, waiting for 3
Dec 30 13:12:22.566: INFO: Found 2 stateful pods, waiting for 3
Dec 30 13:12:32.493: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:12:32.493: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:12:32.493: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 13:12:42.433: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:12:42.433: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 13:12:42.433: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 30 13:12:42.651: INFO: Updating stateful set ss2
Dec 30 13:12:42.720: INFO: Waiting for Pod e2e-tests-statefulset-fdbjk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 13:12:53.073: INFO: Updating stateful set ss2
Dec 30 13:12:53.282: INFO: Waiting for StatefulSet e2e-tests-statefulset-fdbjk/ss2 to complete update
Dec 30 13:12:53.282: INFO: Waiting for Pod e2e-tests-statefulset-fdbjk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 13:13:03.403: INFO: Waiting for StatefulSet e2e-tests-statefulset-fdbjk/ss2 to complete update
Dec 30 13:13:03.403: INFO: Waiting for Pod e2e-tests-statefulset-fdbjk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 13:13:13.467: INFO: Waiting for StatefulSet e2e-tests-statefulset-fdbjk/ss2 to complete update
Dec 30 13:13:23.315: INFO: Waiting for StatefulSet e2e-tests-statefulset-fdbjk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 30 13:13:33.322: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fdbjk
Dec 30 13:13:33.332: INFO: Scaling statefulset ss2 to 0
Dec 30 13:14:03.389: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 13:14:03.451: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:14:03.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-fdbjk" for this suite.
Dec 30 13:14:11.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:14:11.854: INFO: namespace: e2e-tests-statefulset-fdbjk, resource: bindings, ignored listing per whitelist
Dec 30 13:14:11.967: INFO: namespace e2e-tests-statefulset-fdbjk deletion completed in 8.422024771s

• [SLOW TEST:192.759 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
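The canary and phased steps above both rely on the RollingUpdate `partition`: only pods whose ordinal is greater than or equal to the partition receive the new revision, and lowering the partition widens the rollout. A sketch of that selection rule:

```python
def ordinals_to_update(replicas, partition):
    """Pods whose ordinal is >= the RollingUpdate partition get the new
    revision; lower ordinals keep the old one. Sketch of the rule the
    canary/phased test above exercises."""
    return [i for i in range(replicas) if i >= partition]

# "Not applying an update when the partition is greater than the number
# of replicas": nothing updates.
assert ordinals_to_update(3, 4) == []
# Canary: partition=2 updates only ss2-2.
assert ordinals_to_update(3, 2) == [2]
# Phased rollout: lowering the partition rolls more pods.
assert ordinals_to_update(3, 1) == [1, 2]
assert ordinals_to_update(3, 0) == [0, 1, 2]
```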
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:14:11.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 30 13:14:12.195: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:14:30.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-s29xq" for this suite.
Dec 30 13:14:36.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:14:36.629: INFO: namespace: e2e-tests-init-container-s29xq, resource: bindings, ignored listing per whitelist
Dec 30 13:14:36.662: INFO: namespace e2e-tests-init-container-s29xq deletion completed in 6.314075258s

• [SLOW TEST:24.694 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
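The InitContainer test above checks a simple rule: when an init container fails on a pod with `restartPolicy: Never`, the pod is marked Failed and the app containers never start, whereas `OnFailure` retries the init container. A toy encoding of those semantics (illustrative names, not kubelet code):

```python
def pod_outcome(init_results, restart_policy):
    """Outcome of a pod given its init-container results (True = success)
    and restart policy. Sketch of the semantics the test above verifies."""
    if all(init_results):
        return "app-containers-start"
    if restart_policy == "Never":
        return "pod-failed"          # app containers are never started
    return "retry-init"              # OnFailure/Always re-run the init container

assert pod_outcome([True, False], "Never") == "pod-failed"
assert pod_outcome([True, False], "OnFailure") == "retry-init"
assert pod_outcome([True, True], "Never") == "app-containers-start"
```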
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:14:36.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:14:36.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005" in namespace "e2e-tests-projected-gz8j8" to be "success or failure"
Dec 30 13:14:36.958: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.287739ms
Dec 30 13:14:39.083: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161314582s
Dec 30 13:14:41.108: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185708577s
Dec 30 13:14:43.122: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200238827s
Dec 30 13:14:45.141: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21887903s
Dec 30 13:14:47.155: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2328985s
STEP: Saw pod success
Dec 30 13:14:47.155: INFO: Pod "downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 13:14:47.162: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005 container client-container: 
STEP: delete the pod
Dec 30 13:14:47.330: INFO: Waiting for pod downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005 to disappear
Dec 30 13:14:47.362: INFO: Pod downwardapi-volume-54198923-2b06-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:14:47.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gz8j8" for this suite.
Dec 30 13:14:53.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:14:53.581: INFO: namespace: e2e-tests-projected-gz8j8, resource: bindings, ignored listing per whitelist
Dec 30 13:14:53.725: INFO: namespace e2e-tests-projected-gz8j8 deletion completed in 6.344229981s

• [SLOW TEST:17.063 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
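The downward API test above verifies that when a container sets no CPU limit, the value exposed through the downward API volume defaults to the node's allocatable CPU. The fallback can be sketched as:

```python
def effective_cpu_limit(container_limit, node_allocatable_cpu):
    """CPU limit exposed via the downward API: the container's limit if
    set, otherwise the node's allocatable CPU (the behaviour checked by
    the test above; sketch only)."""
    if container_limit is not None:
        return container_limit
    return node_allocatable_cpu

assert effective_cpu_limit(None, 4) == 4   # no limit set -> node allocatable
assert effective_cpu_limit(2, 4) == 2      # an explicit limit wins
```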
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:14:53.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-wgbh9/configmap-test-5e4f37a5-2b06-11ea-8970-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 30 13:14:54.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005" in namespace "e2e-tests-configmap-wgbh9" to be "success or failure"
Dec 30 13:14:54.155: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.266927ms
Dec 30 13:14:56.167: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042016073s
Dec 30 13:14:58.200: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074303467s
Dec 30 13:15:00.623: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49791813s
Dec 30 13:15:02.679: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55381498s
Dec 30 13:15:04.702: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.576923885s
STEP: Saw pod success
Dec 30 13:15:04.702: INFO: Pod "pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005" satisfied condition "success or failure"
Dec 30 13:15:04.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005 container env-test: 
STEP: delete the pod
Dec 30 13:15:04.899: INFO: Waiting for pod pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005 to disappear
Dec 30 13:15:05.077: INFO: Pod pod-configmaps-5e59aac8-2b06-11ea-8970-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:15:05.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wgbh9" for this suite.
Dec 30 13:15:11.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:15:11.242: INFO: namespace: e2e-tests-configmap-wgbh9, resource: bindings, ignored listing per whitelist
Dec 30 13:15:11.318: INFO: namespace e2e-tests-configmap-wgbh9 deletion completed in 6.228291387s

• [SLOW TEST:17.593 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
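The ConfigMap test above injects ConfigMap keys into a container's environment via `env[].valueFrom.configMapKeyRef`. The resulting environment can be sketched as a simple key mapping (hypothetical names, illustrative only):

```python
def env_from_configmap(data, mapping):
    """Environment a container would see when each env var is sourced
    from a ConfigMap key via configMapKeyRef. Sketch of the consumption
    pattern tested above."""
    return {env_name: data[cm_key] for env_name, cm_key in mapping.items()}

cm_data = {"data-1": "value-1"}
env = env_from_configmap(cm_data, {"CONFIG_DATA_1": "data-1"})
assert env == {"CONFIG_DATA_1": "value-1"}
```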
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:15:11.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 30 13:15:11.560: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 30 13:15:11.590: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 30 13:15:17.722: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 30 13:15:21.833: INFO: Creating deployment "test-rolling-update-deployment"
Dec 30 13:15:21.902: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 30 13:15:21.946: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 30 13:15:23.972: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Dec 30 13:15:23.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308521, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:15:25.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308521, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:15:28.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308521, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:15:29.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308522, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308521, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:15:32.012: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 30 13:15:32.045: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-gf5xq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gf5xq/deployments/test-rolling-update-deployment,UID:6ee392c2-2b06-11ea-a994-fa163e34d433,ResourceVersion:16579433,Generation:1,CreationTimestamp:2019-12-30 13:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-30 13:15:22 +0000 UTC 2019-12-30 13:15:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-30 13:15:30 +0000 UTC 2019-12-30 13:15:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

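The Deployment spec above uses the percentage form of the RollingUpdate bounds (`MaxUnavailable:25%`, `MaxSurge:25%`). Those resolve against the replica count with documented rounding: maxSurge rounds up, maxUnavailable rounds down, which is why this 1-replica rollout briefly shows `Replicas:2` with `UnavailableReplicas:1`. A sketch of the resolution:

```python
import math

def rollout_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Resolve percentage-based RollingUpdate bounds: maxSurge rounds up,
    maxUnavailable rounds down (sketch of the documented rounding rules)."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# replicas=1, as in this test: 1 extra pod allowed, 0 may be unavailable,
# so the rollout transiently runs 2 pods (1 old + 1 new).
assert rollout_bounds(1) == (1, 0)
assert rollout_bounds(10) == (3, 2)
```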
Dec 30 13:15:32.070: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-gf5xq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gf5xq/replicasets/test-rolling-update-deployment-75db98fb4c,UID:6ef5c0dd-2b06-11ea-a994-fa163e34d433,ResourceVersion:16579424,Generation:1,CreationTimestamp:2019-12-30 13:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6ee392c2-2b06-11ea-a994-fa163e34d433 0xc002256f37 0xc002256f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 30 13:15:32.070: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 30 13:15:32.070: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-gf5xq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gf5xq/replicasets/test-rolling-update-controller,UID:68c2bd51-2b06-11ea-a994-fa163e34d433,ResourceVersion:16579432,Generation:2,CreationTimestamp:2019-12-30 13:15:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6ee392c2-2b06-11ea-a994-fa163e34d433 0xc002256dd7 0xc002256dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 13:15:32.085: INFO: Pod "test-rolling-update-deployment-75db98fb4c-fnl84" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-fnl84,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-gf5xq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gf5xq/pods/test-rolling-update-deployment-75db98fb4c-fnl84,UID:6f025f1d-2b06-11ea-a994-fa163e34d433,ResourceVersion:16579423,Generation:0,CreationTimestamp:2019-12-30 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 6ef5c0dd-2b06-11ea-a994-fa163e34d433 0xc0026b1a57 0xc0026b1a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4fms4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4fms4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4fms4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026b1af0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b1b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:15:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:15:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:15:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:15:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-30 13:15:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-30 13:15:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://590e487ac73887b71b357066ee537f3a26ed105baf8aa5cfa92394bc68eec000}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:15:32.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gf5xq" for this suite.
Dec 30 13:15:40.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:15:40.825: INFO: namespace: e2e-tests-deployment-gf5xq, resource: bindings, ignored listing per whitelist
Dec 30 13:15:41.227: INFO: namespace e2e-tests-deployment-gf5xq deletion completed in 9.099184503s

• [SLOW TEST:29.909 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
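For reference, a minimal Deployment manifest approximating what the RollingUpdateDeployment test above creates. The names, labels, and image are taken from the ReplicaSet dumps in the log; the `strategy` stanza is an assumption (the log's `deployment.kubernetes.io/max-replicas: 2` annotation with 1 desired replica is consistent with a surge of 1, but the e2e framework builds the object programmatically and does not print the strategy):

```yaml
# Hypothetical manifest; names, labels, and image come from the log above,
# the strategy values are assumptions consistent with the annotations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # consistent with deployment.kubernetes.io/max-replicas: 2
      maxUnavailable: 0  # assumption; not stated in the log
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

On a rolling update, the controller scales up the new ReplicaSet (here `test-rolling-update-deployment-75db98fb4c`) while scaling the old one (`test-rolling-update-controller`) down to 0, which is exactly the end state the two ReplicaSet dumps above show (`Replicas:*1` new, `Replicas:*0` old).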
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 30 13:15:41.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 13:15:41.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6ncs5'
Dec 30 13:15:41.767: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 13:15:41.767: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 30 13:15:43.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6ncs5'
Dec 30 13:15:44.421: INFO: stderr: ""
Dec 30 13:15:44.421: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 30 13:15:44.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6ncs5" for this suite.
Dec 30 13:15:50.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:15:50.985: INFO: namespace: e2e-tests-kubectl-6ncs5, resource: bindings, ignored listing per whitelist
Dec 30 13:15:51.002: INFO: namespace e2e-tests-kubectl-6ncs5 deletion completed in 6.554045921s

• [SLOW TEST:9.774 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
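The stderr at 13:15:41 warns that `kubectl run --generator=deployment/apps.v1` is deprecated. A sketch of the replacements the warning suggests, reusing the image and names from the test; these require a live cluster and are illustrative only:

```shell
# Non-deprecated equivalent of the generator form the test uses:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-6ncs5

# Or, per the warning, create a bare Pod instead of a Deployment:
kubectl run e2e-test-nginx-pod \
  --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-6ncs5
```

Note the version-skew quirk visible in the log: the create at 13:15:41 reports `deployment.apps/...` while the delete at 13:15:44 reports `deployment.extensions/...`, since v1.13 still served Deployments under both API groups.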
SSSSSSSS
Dec 30 13:15:51.003: INFO: Running AfterSuite actions on all nodes
Dec 30 13:15:51.003: INFO: Running AfterSuite actions on node 1
Dec 30 13:15:51.003: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8924.837 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS